Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, William Fedus
Scale has opened new frontiers in natural language processing -- but at a high cost. In response, Mixture-of-Experts (MoE) and Switch Transformers have been proposed as an energy-efficient path to even larger and more capable language models. But advancing the state of the art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning. Our work focuses on these issues and acts as a design guide. We conclude by scaling a sparse model to 269B parameters, with a computational cost comparable to a 32B dense encoder-decoder Transformer (Stable and Transferable Mixture-of-Experts, or ST-MoE-32B). For the first time, a sparse model achieves state-of-the-art performance in transfer learning across a diverse set of tasks, including reasoning (SuperGLUE, ARC Easy, ARC Challenge), summarization (XSum, CNN-DM), closed-book question answering (WebQA, Natural Questions), and adversarially constructed tasks (Winogrande, ANLI R3).
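For context, a sparse MoE layer replaces the Transformer's dense feed-forward block with a set of expert feed-forward networks and a learned router that sends each token to one expert (top-1, as in Switch Transformers) or a few. The sketch below illustrates Switch-style top-1 routing in NumPy, together with the standard auxiliary load-balancing loss and the router z-loss this paper introduces to curb training instability. It is a minimal illustration, not the paper's implementation: all sizes (`n_tokens`, `d_model`, `n_experts`, `capacity_factor`) and the single-matrix experts are toy choices.

```python
# Minimal sketch of Switch-style top-1 MoE routing (illustrative sizes, not
# the paper's configuration).
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d_model, n_experts = 8, 16, 4
capacity_factor = 1.25
# Per-expert token budget; tokens routed beyond it are dropped.
capacity = int(capacity_factor * n_tokens / n_experts)

x = rng.normal(size=(n_tokens, d_model))          # token representations
w_router = rng.normal(size=(d_model, n_experts))  # router weights

logits = x @ w_router
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts

expert_idx = probs.argmax(axis=-1)                # top-1 expert per token
gate = probs[np.arange(n_tokens), expert_idx]     # gating weight of that expert

# Each expert is a feed-forward network (a single matrix here, for brevity).
# Tokens dropped at capacity pass through unchanged, standing in for the
# residual path of the Transformer block.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
y = x.copy()
for e in range(n_experts):
    tokens = np.flatnonzero(expert_idx == e)[:capacity]  # enforce capacity
    if tokens.size:
        y[tokens] = gate[tokens, None] * (x[tokens] @ experts[e])

# Auxiliary load-balancing loss (as in Switch Transformers): pushes the
# fraction of tokens and of router probability toward uniform across experts.
frac_tokens = np.bincount(expert_idx, minlength=n_experts) / n_tokens
frac_probs = probs.mean(axis=0)
aux_loss = n_experts * np.sum(frac_tokens * frac_probs)  # ~1.0 when balanced

# Router z-loss (this paper's stability fix): penalizes large router logits
# by squaring the log-sum-exp of each token's logits.
z = logits.max(axis=-1) + np.log(
    np.exp(logits - logits.max(axis=-1, keepdims=True)).sum(axis=-1)
)
z_loss = np.mean(z ** 2)
```

In training, `aux_loss` and `z_loss` would be added to the cross-entropy objective with small coefficients (the paper weights the z-loss on the order of 1e-3), keeping the router balanced and its logits small without materially hurting quality.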
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Question Answering | COPA | Accuracy | 99.2 | ST-MoE-32B 269B (fine-tuned) |
| Question Answering | COPA | Accuracy | 91.0 | ST-MoE-L 4.1B (fine-tuned) |
| Question Answering | MultiRC | F1 | 89.6 | ST-MoE-32B 269B (fine-tuned) |
| Question Answering | MultiRC | F1 | 86.0 | ST-MoE-L 4.1B (fine-tuned) |
| Question Answering | BoolQ | Accuracy | 92.4 | ST-MoE-32B 269B (fine-tuned) |
| Question Answering | BoolQ | Accuracy | 88.6 | ST-MoE-L 4.1B (fine-tuned) |
| Common Sense Reasoning | WinoGrande | Accuracy | 96.1 | ST-MoE-32B 269B (fine-tuned) |
| Common Sense Reasoning | WinoGrande | Accuracy | 81.7 | ST-MoE-L 4.1B (fine-tuned) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 86.5 | ST-MoE-32B 269B (fine-tuned) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 56.9 | ST-MoE-L 4.1B (fine-tuned) |
| Common Sense Reasoning | ARC (Easy) | Accuracy | 95.2 | ST-MoE-32B 269B (fine-tuned) |
| Common Sense Reasoning | ARC (Easy) | Accuracy | 75.4 | ST-MoE-L 4.1B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | EM | 95.1 | ST-MoE-32B 269B (fine-tuned) |
| Common Sense Reasoning | ReCoRD | EM | 88.9 | ST-MoE-L 4.1B (fine-tuned) |
| Word Sense Disambiguation | Words in Context | Accuracy | 77.7 | ST-MoE-32B 269B (fine-tuned) |
| Word Sense Disambiguation | Words in Context | Accuracy | 74.0 | ST-MoE-L 4.1B (fine-tuned) |
| Natural Language Inference | CommitmentBank | Accuracy | 98.2 | ST-MoE-L 4.1B (fine-tuned) |
| Natural Language Inference | CommitmentBank | Accuracy | 98.0 | ST-MoE-32B 269B (fine-tuned) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 96.6 | ST-MoE-32B 269B (fine-tuned) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 93.3 | ST-MoE-L 4.1B (fine-tuned) |