Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size, the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, a greater than 7% improvement over Gopher.
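The equal-scaling rule above can be sketched numerically. Using the common approximation that training cost is roughly C ≈ 6·N·D FLOPs (N parameters, D tokens), scaling N and D equally means each grows as C^0.5. The sketch below anchors the constants to Chinchilla itself (70B parameters, 1.4T tokens); the function name and the anchoring choice are illustrative assumptions, not part of the paper's fitted scaling laws.

```python
def compute_optimal(c_flops):
    """Sketch of the equal-scaling rule: return (params, tokens) for a
    compute budget in FLOPs, assuming C ~ 6*N*D and N, D each ~ C**0.5.
    Constants are anchored to Chinchilla (assumption for illustration)."""
    n_ref, d_ref = 70e9, 1.4e12      # Chinchilla: 70B params, 1.4T tokens
    c_ref = 6 * n_ref * d_ref        # implied reference budget in FLOPs
    scale = (c_flops / c_ref) ** 0.5 # equal scaling: both grow as sqrt(C)
    return n_ref * scale, d_ref * scale

# 4x the compute budget -> 2x the parameters and 2x the tokens,
# i.e. "for every doubling of model size, double the training tokens".
n, d = compute_optimal(4 * 6 * 70e9 * 1.4e12)
```

Under this rule, quadrupling compute doubles both model size and token count, which is exactly the abstract's "scale equally" prescription.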
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Reading Comprehension | BIG-bench | Accuracy | 92.8 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 49.4 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 52.6 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 77.4 | Chinchilla-70B (zero-shot) |
| Reading Comprehension | BIG-bench | Accuracy | 75 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 82.4 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 69 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 63.3 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 53.1 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 54.5 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 78 | Chinchilla-70B (few-shot, k=5) |
| Reading Comprehension | BIG-bench | Accuracy | 94 | Chinchilla-70B (few-shot, k=5) |
| Transfer Learning | MMLU | Average (%) | 67.5 | Chinchilla-70B (5-shot) |
| Question Answering | SIQA | Accuracy | 51.3 | Chinchilla (zero-shot) |
| Question Answering | Natural Questions | EM | 35.5 | Chinchilla (few-shot, k=64) |
| Question Answering | PIQA | Accuracy | 81.8 | Chinchilla-70B (zero-shot) |
| Question Answering | BoolQ | Accuracy | 83.7 | Chinchilla-70B (zero-shot) |
| Question Answering | BIG-bench (Novel Concepts) | Accuracy | 65.6 | Chinchilla-70B (few-shot, k=5) |
| Question Answering | BIG-bench (Movie Recommendation) | Accuracy | 75.6 | Chinchilla-70B (few-shot, k=5) |
| Question Answering | BIG-bench (Navigate) | Accuracy | 52.6 | Chinchilla-70B (few-shot, k=5) |
| Question Answering | BIG-bench (Ruin Names) | Accuracy | 47.1 | Chinchilla-70B (few-shot, k=5) |
| Question Answering | BIG-bench (Hyperbaton) | Accuracy | 54.2 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Causal Judgment) | Accuracy | 57.4 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Disambiguation QA) | Accuracy | 54.7 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | WinoGrande | Accuracy | 74.9 | Chinchilla-70B (zero-shot) |
| Common Sense Reasoning | BIG-bench (Sports Understanding) | Accuracy | 71 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Winowhy) | Accuracy | 62.5 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Known Unknowns) | Accuracy | 65.2 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Date Understanding) | Accuracy | 52.3 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench (Logical Sequence) | Accuracy | 64.1 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 85.7 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 13.1 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 67.7 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 68.8 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 47.6 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 75 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 73 | Chinchilla-70B (few-shot, k=5) |
| Common Sense Reasoning | BIG-bench | Accuracy | 60.3 | Chinchilla-70B (few-shot, k=5) |
| Word Sense Disambiguation | BIG-bench (Anachronisms) | Accuracy | 69.1 | Chinchilla-70B (few-shot, k=5) |
| Language Modelling | LAMBADA | Accuracy | 77.7 | Chinchilla (zero-shot) |
| Sarcasm Detection | BIG-bench (SNARKS) | Accuracy | 58.6 | Chinchilla-70B (few-shot, k=5) |
| Mathematical Reasoning | BIG-bench | Accuracy | 47.3 | Chinchilla-70B (few-shot, k=5) |
| Analogical Similarity | BIG-bench | Accuracy | 38.1 | Chinchilla-70B (few-shot, k=5) |
| Identify Odd Metaphor | BIG-bench | Accuracy | 68.8 | Chinchilla-70B (few-shot, k=5) |
| Odd One Out | BIG-bench | Accuracy | 70.9 | Chinchilla-70B (few-shot, k=5) |
| Sentence Completion | HellaSwag | Accuracy | 80.8 | Chinchilla-70B (zero-shot) |
| Emotional Intelligence | BIG-bench | Accuracy | 66.2 | Chinchilla-70B (few-shot, k=5) |
| Ethics | BIG-bench | Accuracy | 57.3 | Chinchilla-70B (few-shot, k=5) |
| Fact Checking | BIG-bench | Accuracy | 65.3 | Chinchilla-70B (few-shot, k=5) |
| Fact Checking | BIG-bench | Accuracy | 71.7 | Chinchilla-70B (few-shot, k=5) |
| General Knowledge | BIG-bench | Accuracy | 94.3 | Chinchilla-70B (few-shot, k=5) |
| General Knowledge | BIG-bench | Accuracy | 87 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Penguins In A Table) | Accuracy | 48.7 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Logic Grid Puzzle) | Accuracy | 44 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Temporal Sequences) | Accuracy | 32 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Formal Fallacies Syllogisms Negation) | Accuracy | 52.1 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Reasoning About Colored Objects) | Accuracy | 59.7 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Logical Fallacy Detection) | Accuracy | 72.1 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (StrategyQA) | Accuracy | 68.3 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 79 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 60.6 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 93.1 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 67.1 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 94 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 17.6 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 56.2 | Chinchilla-70B (few-shot, k=5) |
| Logical Reasoning | BIG-bench | Accuracy | 49.9 | Chinchilla-70B (few-shot, k=5) |
| Human Organs Senses Multiple Choice | BIG-bench | Accuracy | 85.7 | Chinchilla-70B (few-shot, k=5) |
| Intent Recognition | BIG-bench | Accuracy | 92.8 | Chinchilla-70B (few-shot, k=5) |