Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PaLM 2 Technical Report

Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, Yaguang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, ZiRui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

Published: 2023-05-17
Tasks: Question Answering · Multi-task Language Understanding · Sentence Completion · Coreference Resolution · Natural Language Inference · Common Sense Reasoning · Code Generation · Language Modelling
Links: Paper · PDF · Code

Abstract

We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English and multilingual language, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities.

When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Machine Translation | FRMT (Chinese - Mainland) | BLEURT | 74.4 | PaLM 2 |
| Machine Translation | FRMT (Chinese - Mainland) | BLEURT | 72.3 | Google Translate |
| Machine Translation | FRMT (Chinese - Mainland) | BLEURT | 70.3 | PaLM |
| Machine Translation | FRMT (Portuguese - Portugal) | BLEURT | 78.3 | PaLM 2 |
| Machine Translation | FRMT (Portuguese - Portugal) | BLEURT | 76.1 | PaLM |
| Machine Translation | FRMT (Portuguese - Portugal) | BLEURT | 75.3 | Google Translate |
| Machine Translation | FRMT (Portuguese - Brazil) | BLEURT | 81.1 | PaLM 2 |
| Machine Translation | FRMT (Portuguese - Brazil) | BLEURT | 80.2 | Google Translate |
| Machine Translation | FRMT (Portuguese - Brazil) | BLEURT | 78.5 | PaLM |
| Machine Translation | FRMT (Chinese - Taiwan) | BLEURT | 72 | PaLM 2 |
| Machine Translation | FRMT (Chinese - Taiwan) | BLEURT | 68.6 | PaLM |
| Machine Translation | FRMT (Chinese - Taiwan) | BLEURT | 68.5 | Google Translate |
| Transfer Learning | MGSM | Average (%) | 87 | PaLM 2 (few-shot, k=8, SC) |
| Transfer Learning | MGSM | Average (%) | 72.2 | PaLM 2 (8-shot, CoT) |
| Question Answering | COPA | Accuracy | 96 | PaLM 2-L (1-shot) |
| Question Answering | COPA | Accuracy | 90 | PaLM 2-M (1-shot) |
| Question Answering | COPA | Accuracy | 89 | PaLM 2-S (1-shot) |
| Question Answering | Natural Questions | EM | 37.5 | PaLM 2-L (one-shot) |
| Question Answering | Natural Questions | EM | 32 | PaLM 2-M (one-shot) |
| Question Answering | Natural Questions | EM | 25.3 | PaLM 2-S (one-shot) |
| Question Answering | Story Cloze | Accuracy | 87.4 | PaLM 2-L (one-shot) |
| Question Answering | Story Cloze | Accuracy | 86.7 | PaLM 2-M (one-shot) |
| Question Answering | Story Cloze | Accuracy | 85.6 | PaLM 2-S (one-shot) |
| Question Answering | StrategyQA | Accuracy | 90.4 | PaLM 2 (few-shot, CoT, SC) |
| Question Answering | MultiRC | F1 | 88.2 | PaLM 2-L (one-shot) |
| Question Answering | MultiRC | F1 | 84.1 | PaLM 2-M (one-shot) |
| Question Answering | MultiRC | F1 | 84 | PaLM 2-S (one-shot) |
| Question Answering | WebQuestions | EM | 28.2 | PaLM 2-L (one-shot) |
| Question Answering | WebQuestions | EM | 26.9 | PaLM 2-M (one-shot) |
| Question Answering | WebQuestions | EM | 21.8 | PaLM 2-S (one-shot) |
| Question Answering | PIQA | Accuracy | 85 | PaLM 2-L (1-shot) |
| Question Answering | PIQA | Accuracy | 83.2 | PaLM 2-M (1-shot) |
| Question Answering | PIQA | Accuracy | 82.2 | PaLM 2-S (1-shot) |
| Question Answering | BoolQ | Accuracy | 90.9 | PaLM 2-L (1-shot) |
| Question Answering | BoolQ | Accuracy | 88.6 | PaLM 2-M (1-shot) |
| Question Answering | BoolQ | Accuracy | 88.1 | PaLM 2-S (1-shot) |
| Question Answering | DROP Test | F1 | 85 | PaLM 2 (few-shot) |
| Question Answering | TriviaQA | EM | 86.1 | PaLM 2-L (one-shot) |
| Question Answering | TriviaQA | EM | 81.7 | PaLM 2-M (one-shot) |
| Question Answering | TriviaQA | EM | 75.2 | PaLM 2-S (one-shot) |
| Question Answering | OpenBookQA | Accuracy | 58.5 | PaLM 2-L (1-shot) |
| Question Answering | OpenBookQA | Accuracy | 57.4 | PaLM 2-S (1-shot) |
| Question Answering | OpenBookQA | Accuracy | 56.2 | PaLM 2-M (1-shot) |
| Question Answering | BIG-bench (Movie Recommendation) | Accuracy | 94.4 | PaLM 2 (few-shot, k=3, CoT) |
| Question Answering | BIG-bench (Movie Recommendation) | Accuracy | 93.6 | PaLM 2 (few-shot, k=3, Direct) |
| Question Answering | BIG-bench (Navigate) | Accuracy | 91.2 | PaLM 2 (few-shot, k=3, CoT) |
| Question Answering | BIG-bench (Navigate) | Accuracy | 68.8 | PaLM 2 (few-shot, k=3, Direct) |
| Question Answering | BIG-bench (Ruin Names) | Accuracy | 90 | PaLM 2 (few-shot, k=3, Direct) |
| Question Answering | BIG-bench (Ruin Names) | Accuracy | 83.6 | PaLM 2 (few-shot, k=3, CoT) |
| Question Answering | BIG-bench (Hyperbaton) | Accuracy | 84.8 | PaLM 2 (few-shot, k=3, Direct) |
| Question Answering | BIG-bench (Hyperbaton) | Accuracy | 82.4 | PaLM 2 (few-shot, k=3, CoT) |
| Question Answering | TyDiQA-GoldP | F1 | 73.6 | PaLM 2-L (one-shot) |
| Question Answering | TyDiQA-GoldP | F1 | 73.3 | PaLM 2-S (one-shot) |
| Question Answering | TyDiQA-GoldP | F1 | 73.3 | PaLM 2-M (one-shot) |
| Question Answering | MATH | Accuracy | 48.8 | PaLM 2 (few-shot, k=4, SC) |
| Question Answering | MATH | Accuracy | 34.3 | PaLM 2 (few-shot, k=4, CoT) |
| Code Generation | MBPP | Accuracy | 50 | PaLM 2-S* (few-shot) |
| Common Sense Reasoning | BIG-bench (Causal Judgment) | Accuracy | 62 | PaLM 2 (few-shot, k=3, Direct) |
| Common Sense Reasoning | BIG-bench (Causal Judgment) | Accuracy | 58.8 | PaLM 2 (few-shot, k=3, CoT) |
| Common Sense Reasoning | BIG-bench (Disambiguation QA) | Accuracy | 78.8 | PaLM 2 (few-shot, k=3, Direct) |
| Common Sense Reasoning | BIG-bench (Disambiguation QA) | Accuracy | 77.6 | PaLM 2 (few-shot, k=3, CoT) |
| Common Sense Reasoning | WinoGrande | Accuracy | 83 | PaLM 2-L (1-shot) |
| Common Sense Reasoning | WinoGrande | Accuracy | 79.2 | PaLM 2-M (1-shot) |
| Common Sense Reasoning | WinoGrande | Accuracy | 77.9 | PaLM 2-S (1-shot) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 95.1 | PaLM 2 (few-shot, CoT, SC) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 69.2 | PaLM 2-L (1-shot) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 64.9 | PaLM 2-M (1-shot) |
| Common Sense Reasoning | ARC (Challenge) | Accuracy | 59.6 | PaLM 2-S (1-shot) |
| Common Sense Reasoning | BIG-bench (Sports Understanding) | Accuracy | 98 | PaLM 2 (few-shot, k=3, CoT) |
| Common Sense Reasoning | BIG-bench (Sports Understanding) | Accuracy | 90.8 | PaLM 2 (few-shot, k=3, Direct) |
| Common Sense Reasoning | ARC (Easy) | Accuracy | 89.7 | PaLM 2-L (1-shot) |
| Common Sense Reasoning | ARC (Easy) | Accuracy | 88 | PaLM 2-M (1-shot) |
| Common Sense Reasoning | ARC (Easy) | Accuracy | 85.6 | PaLM 2-S (1-shot) |
| Common Sense Reasoning | BIG-bench (Date Understanding) | Accuracy | 91.2 | PaLM 2 (few-shot, k=3, CoT) |
| Common Sense Reasoning | BIG-bench (Date Understanding) | Accuracy | 74 | PaLM 2 (few-shot, k=3, Direct) |
| Common Sense Reasoning | CommonsenseQA | Accuracy | 90.4 | PaLM 2 (few-shot, CoT, SC) |
| Common Sense Reasoning | ReCoRD | F1 | 93.8 | PaLM 2-L (one-shot) |
| Common Sense Reasoning | ReCoRD | F1 | 92.4 | PaLM 2-M (one-shot) |
| Common Sense Reasoning | ReCoRD | F1 | 92.1 | PaLM 2-S (one-shot) |
| Word Sense Disambiguation | Words in Context | Accuracy | 66.8 | PaLM 2-L (one-shot) |
| Word Sense Disambiguation | Words in Context | Accuracy | 52 | PaLM 2-M (one-shot) |
| Word Sense Disambiguation | Words in Context | Accuracy | 50.6 | PaLM 2-S (one-shot) |
| Natural Language Inference | ANLI test | A1 | 73.1 | PaLM 2-L (one-shot) |
| Natural Language Inference | ANLI test | A2 | 63.4 | PaLM 2-L (one-shot) |
| Natural Language Inference | ANLI test | A3 | 67.1 | PaLM 2-L (one-shot) |
| Natural Language Inference | ANLI test | A1 | 58.1 | PaLM 2-M (one-shot) |
| Natural Language Inference | ANLI test | A2 | 49.5 | PaLM 2-M (one-shot) |
| Natural Language Inference | ANLI test | A3 | 54.5 | PaLM 2-M (one-shot) |
| Natural Language Inference | ANLI test | A1 | 53.1 | PaLM 2-S (one-shot) |
| Natural Language Inference | ANLI test | A2 | 48.8 | PaLM 2-S (one-shot) |
| Natural Language Inference | ANLI test | A3 | 53.2 | PaLM 2-S (one-shot) |
| Natural Language Inference | CommitmentBank | Accuracy | 87.5 | PaLM 2-L (one-shot) |
| Natural Language Inference | CommitmentBank | Accuracy | 82.1 | PaLM 2-S (one-shot) |
| Natural Language Inference | CommitmentBank | Accuracy | 80.4 | PaLM 2-M (one-shot) |
| Language Modelling | LAMBADA | Accuracy | 86.9 | PaLM 2-L (one-shot) |
| Language Modelling | LAMBADA | Accuracy | 83.7 | PaLM 2-M (one-shot) |
| Language Modelling | LAMBADA | Accuracy | 80.7 | PaLM 2-S (one-shot) |
| Sarcasm Detection | BIG-bench (SNARKS) | Accuracy | 84.8 | PaLM 2 (few-shot, k=3, CoT) |
| Sarcasm Detection | BIG-bench (SNARKS) | Accuracy | 78.7 | PaLM 2 (few-shot, k=3, Direct) |
| Cross-Lingual | XCOPA | Accuracy | 94.4 | PaLM 2 (few-shot) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 88.1 | PaLM 2-M (1-shot) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 86.9 | PaLM 2-L (1-shot) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 84.6 | PaLM 2-S (1-shot) |
| Text Summarization | X-Sum | ROUGE-2 | 23.2 | PaLM 2-L (one-shot) |
| Text Summarization | X-Sum | ROUGE-2 | 17.2 | PaLM 2-M (one-shot) |
| Text Summarization | X-Sum | ROUGE-2 | 16.9 | PaLM 2-S (one-shot) |
| Text Classification | Civil Comments | AUROC | 0.8535 | PaLM 2 (few-shot, k=10) |
| Text Classification | Civil Comments | AUROC | 0.7596 | PaLM 2 (zero-shot) |
| Math Word Problem Solving | MATH | Accuracy | 48.8 | PaLM 2 (few-shot, k=4, SC) |
| Math Word Problem Solving | MATH | Accuracy | 34.3 | PaLM 2 (few-shot, k=4, CoT) |
| Cross-Lingual Transfer | XCOPA | Accuracy | 94.4 | PaLM 2 (few-shot) |
| Mathematical Question Answering | MATH | Accuracy | 48.8 | PaLM 2 (few-shot, k=4, SC) |
| Mathematical Question Answering | MATH | Accuracy | 34.3 | PaLM 2 (few-shot, k=4, CoT) |
| Multi-Task Learning | MGSM | Average (%) | 87 | PaLM 2 (few-shot, k=8, SC) |
| Multi-Task Learning | MGSM | Average (%) | 72.2 | PaLM 2 (8-shot, CoT) |
| Mathematical Reasoning | MATH | Accuracy | 48.8 | PaLM 2 (few-shot, k=4, SC) |
| Mathematical Reasoning | MATH | Accuracy | 34.3 | PaLM 2 (few-shot, k=4, CoT) |
| Classification | Civil Comments | AUROC | 0.8535 | PaLM 2 (few-shot, k=10) |
| Classification | Civil Comments | AUROC | 0.7596 | PaLM 2 (zero-shot) |
| Sentence Completion | HellaSwag | Accuracy | 87.4 | PaLM 2-L (1-shot) |
| Sentence Completion | HellaSwag | Accuracy | 86.7 | PaLM 2-M (1-shot) |
| Sentence Completion | HellaSwag | Accuracy | 85.6 | PaLM 2-S (1-shot) |
| Arithmetic Reasoning | GSM8K | Accuracy | 91 | PaLM 2 (few-shot, k=8, SC) |
| Arithmetic Reasoning | GSM8K | Accuracy | 80.7 | PaLM 2 (few-shot, k=8, CoT) |
| Logical Reasoning | BIG-bench (Penguins In A Table) | Accuracy | 84.9 | PaLM 2 (few-shot, k=3, CoT) |
| Logical Reasoning | BIG-bench (Penguins In A Table) | Accuracy | 65.8 | PaLM 2 (few-shot, k=3, Direct) |
| Logical Reasoning | BIG-bench (Logic Grid Puzzle) | Accuracy | 42.4 | PaLM-540B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Logic Grid Puzzle) | Accuracy | 36.5 | PaLM-62B (few-shot, k=5) |
| Logical Reasoning | BIG-bench (Temporal Sequences) | Accuracy | 100 | PaLM 2 (few-shot, k=3, CoT) |
| Logical Reasoning | BIG-bench (Temporal Sequences) | Accuracy | 96.4 | PaLM 2 (few-shot, k=3, Direct) |
| Logical Reasoning | BIG-bench (Formal Fallacies Syllogisms Negation) | Accuracy | 64.8 | PaLM 2 (few-shot, k=3, Direct) |
| Logical Reasoning | BIG-bench (Formal Fallacies Syllogisms Negation) | Accuracy | 57.2 | PaLM 2 (few-shot, k=3, CoT) |
| Logical Reasoning | BIG-bench (Reasoning About Colored Objects) | Accuracy | 91.2 | PaLM 2 (few-shot, k=3, CoT) |
| Logical Reasoning | BIG-bench (Reasoning About Colored Objects) | Accuracy | 61.2 | PaLM 2 (few-shot, k=3, Direct) |
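Many rows above report results under "CoT, SC": chain-of-thought prompting combined with self-consistency decoding, where the model is sampled several times and the most common final answer wins a majority vote. The table only names the technique; as a rough illustration, a minimal sketch of the voting step is below, with `sample_cot` standing in as a hypothetical function that returns the final answer extracted from one sampled reasoning chain (not an API from the paper):

```python
from collections import Counter

def self_consistency(sample_cot, prompt, n_samples=8):
    """Sample n chain-of-thought completions and majority-vote the answers.

    sample_cot: callable taking a prompt and returning one sampled
    final-answer string (the reasoning chain itself is discarded).
    Ties are broken by first occurrence, as Counter.most_common does.
    """
    answers = [sample_cot(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic stand-in for a sampled model: 8 "chains" whose final
# answers disagree, with "18" appearing most often.
_fake_answers = iter(["18", "21", "18", "18", "21", "18", "18", "18"])
def fake_sampler(prompt):
    return next(_fake_answers)

result = self_consistency(fake_sampler, "Q: ...", n_samples=8)
print(result)  # "18" wins the vote 6-2
```

The k values in the table (e.g. k=8 for GSM8K) refer to the number of few-shot exemplars in the prompt, not the number of sampled chains.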
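The open-domain QA rows (Natural Questions, TriviaQA, WebQuestions) use EM and F1, the standard SQuAD-style metrics: exact match requires the normalized prediction to equal a gold answer, while F1 scores token overlap between prediction and gold. A simplified sketch of these metrics (not the official evaluation script, which handles multiple gold answers and a few more normalization cases):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))           # 1.0
print(round(token_f1("Eiffel Tower in Paris", "the Eiffel Tower"), 3))  # 0.667
```

Corpus-level EM and F1 are averages of these per-example scores over the test set, reported as percentages in the table.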

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
- Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)