Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Tuo Zhao
Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, because data for downstream tasks is limited and the capacity of pre-trained models is extremely large, aggressive fine-tuning often causes the adapted model to overfit the downstream data and forget the knowledge of the pre-trained model. To address this issue in a principled manner, we propose a new computational framework for robust and efficient fine-tuning of pre-trained language models. Specifically, the framework contains two important ingredients: (i) smoothness-inducing regularization, which effectively manages the capacity of the model, and (ii) Bregman proximal point optimization, a class of trust-region methods that prevents knowledge forgetting. Our experiments demonstrate that the proposed method achieves state-of-the-art performance on multiple NLP benchmarks.
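The two ingredients above can be sketched for a toy linear softmax classifier. This is an illustrative NumPy sketch only, not the paper's implementation: the paper solves the inner smoothness maximization with projected gradient ascent on large Transformer models, whereas here a random perturbation stands in for that inner step, and the function names (`smart_objective`, `sym_kl`) are our own.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-12):
    # Symmetric KL divergence between two probability vectors.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

def smart_objective(W, x, y, W_prev, lam=1.0, mu=1.0, eps_ball=1e-3, rng=None):
    """SMART-style objective for a linear softmax classifier (illustrative).

    W: (d, k) weights; x: (d,) input; y: integer label;
    W_prev: previous iterate for the proximal term.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    p = softmax(x @ W)
    task_loss = -np.log(p[y] + 1e-12)  # cross-entropy on the clean input
    # Smoothness-inducing term: penalize prediction change under a small
    # input perturbation (random direction stands in for the inner max).
    delta = rng.normal(size=x.shape)
    delta = eps_ball * delta / (np.linalg.norm(delta) + 1e-12)
    smooth_reg = sym_kl(p, softmax((x + delta) @ W))
    # Bregman proximal term (squared Euclidean generating function):
    # keeps the new iterate close to the previous one, limiting forgetting.
    prox = float(np.sum((W - W_prev) ** 2))
    return task_loss + lam * smooth_reg + mu * prox
```

Minimizing this objective with any gradient method trades off task fit (first term), local smoothness of the predictions (second term), and distance from the previous iterate (third term); the paper applies the same decomposition to the fine-tuning loss of a pre-trained language model.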
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Natural Language Inference | AX | Accuracy | 53.1 | T5 |
| Natural Language Inference | SciTail | Dev Accuracy (%) | 96.1 | MT-DNN-SMART (100% of training data) |
| Natural Language Inference | SciTail | Dev Accuracy (%) | 91.3 | MT-DNN-SMART (10% of training data) |
| Natural Language Inference | SciTail | Dev Accuracy (%) | 88.6 | MT-DNN-SMART (1% of training data) |
| Natural Language Inference | SciTail | Dev Accuracy (%) | 82.3 | MT-DNN-SMART (0.1% of training data) |
| Natural Language Inference | SciTail | Dev Accuracy (%) | 96.6 | MT-DNN-SMART-LARGE-v0 |
| Natural Language Inference | SciTail | Test Accuracy (%) | 95.2 | MT-DNN-SMART-LARGE-v0 |
| Natural Language Inference | MNLI + SNLI + ANLI + FEVER | Dev Accuracy (%) | 57.1 | SMART-RoBERTa-LARGE |
| Natural Language Inference | MNLI + SNLI + ANLI + FEVER | Test Accuracy (%) | 57.1 | SMART-RoBERTa-LARGE |
| Natural Language Inference | SNLI | Dev Accuracy (%) | 92.6 | MT-DNN-SMART-LARGE-v0 |
| Natural Language Inference | SNLI | Test Accuracy (%) | 91.7 | MT-DNN-SMART-LARGE-v0 |
| Natural Language Inference | SNLI | Dev Accuracy (%) | 91.6 | MT-DNN-SMART (100% of training data) |
| Natural Language Inference | SNLI | Dev Accuracy (%) | 88.7 | MT-DNN-SMART (10% of training data) |
| Natural Language Inference | SNLI | Dev Accuracy (%) | 86.0 | MT-DNN-SMART (1% of training data) |
| Natural Language Inference | SNLI | Dev Accuracy (%) | 82.7 | MT-DNN-SMART (0.1% of training data) |
| Natural Language Inference | MultiNLI | Matched Accuracy (%) | 92.0 | T5 |
| Natural Language Inference | MultiNLI | Mismatched Accuracy (%) | 91.7 | T5 |
| Natural Language Inference | MultiNLI | Accuracy (%) | 85.7 | MT-DNN-SMART-v0 |
| Natural Language Inference | MultiNLI | Accuracy (%) | 85.7 | MT-DNN-SMART |
| Natural Language Inference | MultiNLI | Accuracy (%) | 85.6 | SMART+BERT-BASE |
| Natural Language Inference | MultiNLI | Dev Matched Accuracy (%) | 91.1 | SMART-RoBERTa |
| Natural Language Inference | MultiNLI | Dev Mismatched Accuracy (%) | 91.3 | SMART-RoBERTa |
| Natural Language Inference | MultiNLI | Dev Matched Accuracy (%) | 85.6 | SMART-BERT |
| Natural Language Inference | MultiNLI | Dev Mismatched Accuracy (%) | 86.0 | SMART-BERT |
| Semantic Textual Similarity | MRPC | F1 (%) | 91.7 | MT-DNN-SMART |
| Semantic Textual Similarity | STS Benchmark | Pearson Correlation (%) | 92.9 | MT-DNN-SMART |
| Semantic Textual Similarity | STS Benchmark | Spearman Correlation (%) | 92.5 | MT-DNN-SMART |
| Semantic Textual Similarity | STS Benchmark | Dev Pearson Correlation (%) | 92.8 | SMART-RoBERTa |
| Semantic Textual Similarity | STS Benchmark | Dev Spearman Correlation (%) | 92.6 | SMART-RoBERTa |
| Semantic Textual Similarity | STS Benchmark | Dev Pearson Correlation (%) | 90.0 | SMART-BERT |
| Semantic Textual Similarity | STS Benchmark | Dev Spearman Correlation (%) | 89.4 | SMART-BERT |
| Sentiment Analysis | SST-2 (binary classification) | Accuracy (%) | 97.5 | MT-DNN-SMART |
| Sentiment Analysis | SST-2 (binary classification) | Accuracy (%) | 93.6 | MT-DNN |
| Sentiment Analysis | SST-2 (binary classification) | Accuracy (%) | 93.0 | SMART+BERT-BASE |
| Sentiment Analysis | SST-2 (binary classification) | Dev Accuracy (%) | 96.9 | SMART-RoBERTa |
| Sentiment Analysis | SST-2 (binary classification) | Dev Accuracy (%) | 96.1 | SMART-MT-DNN |
| Sentiment Analysis | SST-2 (binary classification) | Dev Accuracy (%) | 93.0 | SMART-BERT |
| Paraphrase Identification | Quora Question Pairs | F1 (%) | 90.7 | ALICE |
| Paraphrase Identification | Quora Question Pairs | Accuracy (%) | 74.8 | FreeLB |
| Paraphrase Identification | Quora Question Pairs | Dev Accuracy (%) | 92.6 | FreeLB |
| Paraphrase Identification | Quora Question Pairs | Dev Accuracy (%) | 91.5 | SMART-BERT |
| Paraphrase Identification | Quora Question Pairs | Dev F1 (%) | 88.5 | SMART-BERT |
| Natural Language Understanding | GLUE | Average Score | 89.9 | MT-DNN-SMART |