Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train, and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
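As a minimal PyTorch sketch of the triple loss described above: the function below combines a temperature-softened distillation term, the masked language modeling term, and a cosine-distance term between student and teacher hidden states. The function name, argument shapes, and the default coefficients `alpha_ce`, `alpha_mlm`, `alpha_cos`, and `temperature` are illustrative assumptions, not the paper's exact training configuration.

```python
import torch
import torch.nn.functional as F


def distillation_triple_loss(student_logits, teacher_logits,
                             student_hidden, teacher_hidden,
                             labels, temperature=2.0,
                             alpha_ce=1.0, alpha_mlm=1.0, alpha_cos=1.0):
    """Sketch of the DistilBERT-style triple loss (illustrative weights).

    Assumed shapes: logits are (batch, seq_len, vocab_size), hidden states
    are (batch, seq_len, dim), labels are (batch, seq_len) with -100 marking
    positions ignored by the MLM loss.
    """
    # Distillation loss: soften both distributions with a temperature and
    # match the student to the teacher with KL divergence.
    ce_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Masked language modeling loss on the hard (masked-token) labels.
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # Cosine-distance loss: align the directions of the student's and
    # teacher's hidden-state vectors.
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(flat_student.size(0), device=flat_student.device)
    cos_loss = F.cosine_embedding_loss(flat_student, flat_teacher, target)

    return alpha_ce * ce_loss + alpha_mlm * mlm_loss + alpha_cos * cos_loss
```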
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | SQuAD1.1 dev | EM | 77.7 | DistilBERT 66M |
| Question Answering | SQuAD1.1 dev | F1 | 85.8 | DistilBERT 66M |
| Question Answering | MultiTQ | Hits@1 | 8.3 | DistilBERT |
| Question Answering | MultiTQ | Hits@10 | 48.4 | DistilBERT |
| Natural Language Inference | WNLI | Accuracy | 44.4 | DistilBERT 66M |
| Semantic Textual Similarity | STS Benchmark | Pearson Correlation | 0.907 | DistilBERT 66M |
| Sentiment Analysis | SST-2 (binary classification) | Accuracy | 91.3 | DistilBERT 66M |
| Sentiment Analysis | IMDb | Accuracy | 92.82 | DistilBERT 66M |
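As an illustration of how one of these fine-tuned checkpoints can be used, the sketch below runs a DistilBERT SST-2 sentiment model through the Hugging Face `transformers` pipeline. The checkpoint name is a publicly available Hub model chosen for illustration; scores on your own data may differ from the table above.

```python
from transformers import pipeline

# Load a DistilBERT checkpoint fine-tuned on SST-2 (binary sentiment).
# Swap in your own fine-tuned DistilBERT to reproduce other rows of the table.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("DistilBERT keeps most of BERT's accuracy at a fraction of the cost."))
# Expected output shape: [{'label': 'POSITIVE', 'score': 0.99...}]
```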