Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance across a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
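The core idea above is to rephrase each dataset example as a natural-language instruction, optionally with a chain-of-thought rationale, before finetuning. A minimal sketch of this formatting step is below; the template strings and the `format_as_instruction` helper are illustrative assumptions, not the actual Flan templates:

```python
# Sketch of phrasing a (question, answer) pair as an instruction-following
# training example; templates here are illustrative, not the Flan templates.

def format_as_instruction(example: dict, with_cot: bool = False) -> dict:
    """Build an (input, target) pair, optionally asking for a rationale."""
    instruction = "Answer the following question."
    if with_cot:
        instruction += " Give the rationale before answering."
    prompt = f"{instruction}\n\nQuestion: {example['question']}\nAnswer:"
    if with_cot:
        target = example.get("rationale", "") + " " + example["answer"]
    else:
        target = example["answer"]
    return {"input": prompt, "target": target.strip()}

sample = {"question": "What is 2 + 3?", "answer": "5",
          "rationale": "2 plus 3 equals 5."}
print(format_as_instruction(sample)["input"])
print(format_as_instruction(sample, with_cot=True)["target"])
```

Mixing plain and CoT-formatted versions of the same examples is what lets a single finetuned checkpoint serve both direct and step-by-step prompting.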
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 66.5 | Flan-PaLM 540B (3-shot, finetuned, CoT + SC) |
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 62.2 | PaLM 540B (CoT + SC) |
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 61.3 | Flan-PaLM 540B (3-shot, finetuned, CoT) |
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 57.6 | PaLM 540B (CoT) |
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 48.2 | Flan-PaLM 540B (3-shot, finetuned) |
| Transfer / Multi-Task Learning | BBH-alg | Average (%) | 38.3 | PaLM 540B |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 73.5 | Llama 2 65B |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 59.5 | GPT-3 Davinci 175B (CoT) |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 45.5 | Flan-T5-XL 3B (CoT) |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 45.1 | Flan-T5-Large 780M |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 40.5 | Flan-T5-Large 780M (CoT) |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 39.7 | GPT-3 Davinci 175B (5-shot) |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 35.9 | Flan-T5-Base 250M |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 33.7 | Flan-T5-Base 250M (CoT) |
| Transfer / Multi-Task Learning | MMLU | Average (%) | 28.7 | Flan-T5-Small 80M |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 72.0 | Flan-PaLM 540B (8-shot, finetuned, CoT + SC) |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 60.4 | Flan-U-PaLM 540B (CoT) |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 57.0 | Flan-PaLM 540B (8-shot, finetuned, CoT) |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 36.0 | text-davinci-003 |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 35.0 | code-davinci-002 |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 23.7 | text-davinci-002 |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 21.2 | Flan-PaLM 540B (8-shot, finetuned) |
| Transfer / Multi-Task Learning | MGSM | Average (%) | 5.7 | GPT-3 Davinci 175B |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 78.4 | Flan-PaLM 540B (3-shot, finetuned, CoT + SC) |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 78.2 | PaLM 540B (CoT + SC) |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 72.4 | Flan-PaLM 540B (3-shot, finetuned, CoT) |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 71.2 | PaLM 540B (CoT) |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 70.0 | Flan-PaLM 540B (5-shot, finetuned) |
| Transfer / Multi-Task Learning | BBH-nlp | Average (%) | 62.7 | PaLM 540B |
| Question Answering | TyDiQA-GoldP | EM | 68.3 | Flan-U-PaLM 540B (direct prompting) |
| Question Answering | TyDiQA-GoldP | EM | 67.8 | Flan-PaLM 540B (direct prompting) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 89.82 | Flan-T5-XXL (zero-shot) |