BERT-A: Finetuning BERT with Adapters and Data Augmentation

S. Semnani, K. R. Sadagopan, F. Tlili - Stanford University, 2019 - openreview.net
Abstract
We tackle the contextual question answering (QA) problem on the SQuAD 2.0 dataset. Our project has two main objectives. First, we aim to build a model that achieves reasonable performance while keeping the number of trainable parameters to a minimum. To this end, we insert task-specific adapter modules inside the pre-trained BERT model to control the flow of information between transformer blocks. Our proposed fine-tuning method achieves performance comparable to fine-tuning all BERT parameters while training only 0.57% of them. Second, we build on these findings to achieve an EM score of 78.36 and an F1 score of 81.44 on the test set (ranked 3rd on the PCE test leaderboard).
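The abstract gives no implementation details, so the following is only a minimal sketch of the kind of bottleneck adapter (in the style of Houlsby et al., 2019) that such a method typically inserts between transformer blocks, assuming PyTorch. The class name `Adapter`, the layer sizes, and the `freeze_all_but_adapters` helper are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Hypothetical bottleneck adapter: down-project the hidden states,
    apply a nonlinearity, project back up, and add a residual connection
    so the module starts out close to the identity."""

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Zero-init the up-projection so the adapter is an exact identity
        # at the start of fine-tuning, leaving pre-trained BERT undisturbed.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def freeze_all_but_adapters(model: nn.Module) -> None:
    """Freeze every pre-trained parameter; train only inserted adapters.
    Assumes `model` is a BERT variant whose adapter submodules have
    'adapter' in their parameter names."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name.lower()
```

Because only the adapter weights receive gradients, the trainable fraction is set by the bottleneck size and the number of insertion points; the paper's 0.57% figure would correspond to a particular choice of both, which the abstract does not specify.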