Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Llama 2: Open Foundation and Fine-Tuned Chat Models

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom

2023-07-18 · Question Answering · Math Word Problem Solving · Multi-task Language Understanding · Sentence Completion · Arithmetic Reasoning · Code Generation · Multiple Choice Question Answering (MCQA)
Paper · PDF · Code (official) · community implementations

Abstract

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
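The released chat checkpoints can be queried with standard open-source tooling. Below is a minimal sketch of running a Llama 2-Chat model for dialogue; it assumes the Hugging Face transformers library and access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint, neither of which is prescribed by the paper itself.

```python
# Minimal sketch: generating a chat response with Llama 2-Chat.
# Assumes: `transformers` and `torch` installed, and approved access
# to the gated meta-llama/Llama-2-7b-chat-hf weights on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Llama 2-Chat was fine-tuned with [INST] ... [/INST] dialogue markers,
# so instructions should be wrapped accordingly.
prompt = "[INST] What is the capital of France? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same pattern applies to the 13B and 70B chat checkpoints; only the model identifier and the hardware requirements change.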

Results

Task | Dataset | Metric | Value | Model
Transfer Learning | MMLU | Average (%) | 62.6 | LLaMA 2 34B (5-shot)
Transfer Learning | MMLU | Average (%) | 54.8 | LLaMA 2 13B (5-shot)
Transfer Learning | MMLU | Average (%) | 45.3 | LLaMA 2 7B (5-shot)
Question Answering | MultiTQ | Hits@1 | 18.5 | LLaMA2
Question Answering | Natural Questions | EM | 33 | LLaMA 2 70B (one-shot)
Question Answering | PIQA | Accuracy | 82.8 | LLaMA 2 70B (0-shot)
Question Answering | PIQA | Accuracy | 81.9 | LLaMA 2 34B (0-shot)
Question Answering | PIQA | Accuracy | 80.5 | LLaMA 2 13B (0-shot)
Question Answering | PIQA | Accuracy | 78.8 | LLaMA 2 7B (0-shot)
Question Answering | UniProtQA | BLEU-2 | 0.019 | Llama2-7B-chat
Question Answering | UniProtQA | BLEU-4 | 0.002 | Llama2-7B-chat
Question Answering | UniProtQA | METEOR | 0.052 | Llama2-7B-chat
Question Answering | UniProtQA | ROUGE-1 | 0.103 | Llama2-7B-chat
Question Answering | UniProtQA | ROUGE-2 | 0.06 | Llama2-7B-chat
Question Answering | UniProtQA | ROUGE-L | 0.009 | Llama2-7B-chat
Question Answering | BoolQ | Accuracy | 85 | LLaMA 2 70B (0-shot)
Question Answering | BoolQ | Accuracy | 83.7 | LLaMA 2 34B (0-shot)
Question Answering | BoolQ | Accuracy | 81.7 | LLaMA 2 13B (0-shot)
Question Answering | BoolQ | Accuracy | 77.4 | LLaMA 2 7B (0-shot)
Question Answering | TriviaQA | EM | 85 | LLaMA 2 70B (one-shot)
Question Answering | PubChemQA | BLEU-2 | 0.075 | Llama2-7B-chat
Question Answering | PubChemQA | BLEU-4 | 0.009 | Llama2-7B-chat
Question Answering | PubChemQA | METEOR | 0.149 | Llama2-7B-chat
Question Answering | PubChemQA | ROUGE-1 | 0.184 | Llama2-7B-chat
Question Answering | PubChemQA | ROUGE-2 | 0.043 | Llama2-7B-chat
Question Answering | PubChemQA | ROUGE-L | 0.142 | Llama2-7B-chat
Question Answering | MMLU (Professional medicine) | Accuracy | 43.38 | Llama2-7B
Question Answering | MMLU (Professional medicine) | Accuracy | 40.07 | Llama2-7B-chat
Question Answering | MAWPS | Accuracy (%) | 82.4 | LLaMA 2-Chat
Question Answering | SVAMP | Execution Accuracy | 69.2 | LLaMA 2-Chat
Code Generation | MBPP | Accuracy | 45 | Llama 2 70B (0-shot)
Code Generation | MBPP | Accuracy | 33 | Llama 2 34B (0-shot)
Code Generation | MBPP | Accuracy | 30.6 | Llama 2 13B (0-shot)
Code Generation | MBPP | Accuracy | 20.8 | Llama 2 7B (0-shot)
Math Word Problem Solving | MAWPS | Accuracy (%) | 82.4 | LLaMA 2-Chat
Math Word Problem Solving | SVAMP | Execution Accuracy | 69.2 | LLaMA 2-Chat
Mathematical Question Answering | MAWPS | Accuracy (%) | 82.4 | LLaMA 2-Chat
Mathematical Question Answering | SVAMP | Execution Accuracy | 69.2 | LLaMA 2-Chat
Multi-Task Learning | MMLU | Average (%) | 62.6 | LLaMA 2 34B (5-shot)
Multi-Task Learning | MMLU | Average (%) | 54.8 | LLaMA 2 13B (5-shot)
Multi-Task Learning | MMLU | Average (%) | 45.3 | LLaMA 2 7B (5-shot)
Mathematical Reasoning | MAWPS | Accuracy (%) | 82.4 | LLaMA 2-Chat
Mathematical Reasoning | SVAMP | Execution Accuracy | 69.2 | LLaMA 2-Chat
Sentence Completion | HellaSwag | Accuracy | 85.3 | LLaMA 2 70B (0-shot)
Sentence Completion | HellaSwag | Accuracy | 83.3 | LLaMA 2 34B (0-shot)
Sentence Completion | HellaSwag | Accuracy | 80.7 | LLaMA 2 13B (0-shot)
Sentence Completion | HellaSwag | Accuracy | 77.2 | LLaMA 2 7B (0-shot)
Arithmetic Reasoning | GSM8K | Accuracy | 56.8 | LLaMA 2 70B (one-shot)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 70 | LLaMA 2 70B (one-shot)
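The shot counts in the table (0-shot, one-shot, 5-shot) refer to how many worked examples are placed in the prompt before the test question. The sketch below illustrates that convention together with the EM (exact match) metric reported for Natural Questions and TriviaQA; the prompt template and scoring are illustrative assumptions, not the paper's exact evaluation harness.

```python
# Illustrative sketch of n-shot prompting and exact-match scoring.
# The "Question:/Answer:" template is an assumption for demonstration.

def build_few_shot_prompt(examples, question, n_shots=5):
    """Prepend n_shots (question, answer) pairs to the test question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples[:n_shots]]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

def exact_match(prediction: str, reference: str) -> bool:
    """EM: the model's answer must match the reference after normalization."""
    return prediction.strip().lower() == reference.strip().lower()

# Example: a 2-shot prompt for a trivia-style question.
shots = [("What is 2 + 2?", "4"), ("Who wrote Hamlet?", "Shakespeare")]
print(build_few_shot_prompt(shots, "What is the capital of France?", n_shots=2))
print(exact_match(" Paris ", "paris"))  # True
```

A 0-shot setting simply omits the worked examples, leaving only the test question in the prompt.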

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)