Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Large Language Models are Zero-Shot Reasoners

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa

Published: 2022-05-24
Tasks: Few-Shot Learning, Math Word Problem Solving, Common Sense Reasoning, Logical Reasoning, Arithmetic Reasoning
Dataset: GSM8K

Abstract

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance on arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvement with another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal, strongest zero-shot baseline for challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
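The abstract describes a two-stage prompting scheme: first append the trigger phrase to elicit a reasoning chain, then feed that chain back with an answer-extraction prompt. A minimal sketch of that flow is below; `complete` is a hypothetical stand-in for whatever LLM completion function you use (it is not part of the paper), while the two trigger strings follow the templates the paper reports.

```python
# Sketch of the two-stage Zero-shot-CoT prompting scheme.
# `complete` is a caller-supplied function: prompt string -> model completion.

REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer (arabic numerals) is"

def reasoning_prompt(question: str) -> str:
    """Stage 1: elicit a step-by-step reasoning chain."""
    return f"Q: {question}\nA: {REASONING_TRIGGER}"

def answer_prompt(question: str, reasoning: str) -> str:
    """Stage 2: append the generated reasoning and ask for the final answer."""
    return f"{reasoning_prompt(question)} {reasoning}\n{ANSWER_TRIGGER}"

def zero_shot_cot(question: str, complete) -> str:
    """Run both stages with the supplied completion function."""
    reasoning = complete(reasoning_prompt(question))
    return complete(answer_prompt(question, reasoning))
```

Because both stages are plain string templates, the same code works unchanged across the arithmetic, symbolic, and logical reasoning benchmarks listed above; only `question` varies.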

Results

Task | Dataset | Metric | Value | Model
Question Answering | SVAMP | Execution Accuracy | 62.1 | PaLM (zero-shot, CoT)
Question Answering | SVAMP | Execution Accuracy | 58.8 | PaLM (zero-shot)
Common Sense Reasoning | ReCoRD | F1 | 90.2 | GPT-3 175B (one-shot)
Math Word Problem Solving | SVAMP | Execution Accuracy | 62.1 | PaLM (zero-shot, CoT)
Math Word Problem Solving | SVAMP | Execution Accuracy | 58.8 | PaLM (zero-shot)
Mathematical Question Answering | SVAMP | Execution Accuracy | 62.1 | PaLM (zero-shot, CoT)
Mathematical Question Answering | SVAMP | Execution Accuracy | 58.8 | PaLM (zero-shot)
Mathematical Reasoning | SVAMP | Execution Accuracy | 62.1 | PaLM (zero-shot, CoT)
Mathematical Reasoning | SVAMP | Execution Accuracy | 58.8 | PaLM (zero-shot)
Arithmetic Reasoning | MultiArith | Accuracy | 78.7 | text-davinci-002 (175B) (zero-shot, CoT)
Arithmetic Reasoning | MultiArith | Accuracy | 17.7 | text-davinci-002 (175B) (zero-shot)
Arithmetic Reasoning | GSM8K | Accuracy | 58.1 | PaLM 540B (few-shot, CoT)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 540 | PaLM 540B (few-shot, CoT)
Arithmetic Reasoning | GSM8K | Accuracy | 55 | Finetuned GPT-3 175B + verifier
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | Finetuned GPT-3 175B + verifier
Arithmetic Reasoning | GSM8K | Accuracy | 51.5 | text-davinci-002 (175B) (zero-plus-few-shot, CoT, 8 samples)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | text-davinci-002 (175B) (zero-plus-few-shot, CoT, 8 samples)
Arithmetic Reasoning | GSM8K | Accuracy | 41.3 | text-davinci-002 (175B) (2-shot, CoT)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | text-davinci-002 (175B) (2-shot, CoT)
Arithmetic Reasoning | GSM8K | Accuracy | 40.7 | text-davinci-002 (175B) (zero-shot, CoT)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | text-davinci-002 (175B) (zero-shot, CoT)
Arithmetic Reasoning | GSM8K | Accuracy | 17.9 | PaLM 540B (few-shot)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 540 | PaLM 540B (few-shot)
Arithmetic Reasoning | GSM8K | Accuracy | 10.4 | text-davinci-002 (175B) (zero-shot)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | text-davinci-002 (175B) (zero-shot)
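The headline gains for text-davinci-002 (175B) can be read directly off the table; the small sketch below just recomputes the absolute and relative improvements from appending the CoT trigger, using the accuracy values reported above.

```python
# Accuracy values copied from the results table above
# (text-davinci-002, 175B, zero-shot vs. zero-shot CoT).
acc_zero_shot = {"MultiArith": 17.7, "GSM8K": 10.4}
acc_zero_shot_cot = {"MultiArith": 78.7, "GSM8K": 40.7}

for dataset in acc_zero_shot:
    gain = acc_zero_shot_cot[dataset] - acc_zero_shot[dataset]
    factor = acc_zero_shot_cot[dataset] / acc_zero_shot[dataset]
    print(f"{dataset}: +{gain:.1f} points ({factor:.1f}x higher)")
# MultiArith: +61.0 points (4.4x higher)
# GSM8K: +30.3 points (3.9x higher)
```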

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
GEMMAS: Graph-based Evaluation Metrics for Multi Agent Systems (2025-07-17)
DAC: A Dynamic Attention-aware Approach for Task-Agnostic Prompt Compression (2025-07-16)
KisMATH: Do LLMs Have Knowledge of Implicit Structures in Mathematical Reasoning? (2025-07-15)
DCR: Quantifying Data Contamination in LLMs Evaluation (2025-07-15)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)