Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Code Llama: Open Foundation Models for Code

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve

Published: 2023-08-24 · Tags: Instruction Following · 16k · Code Generation · HumanEval
Links: Paper · PDF · Code · Code (official)

Abstract

We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
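The infilling capability described above is driven by a fill-in-the-middle prompt format, in which the model receives the code before and after a gap and generates the missing middle span. A minimal sketch of assembling such a prompt is below; the `<PRE>`/`<SUF>`/`<MID>` sentinel strings follow the Code Llama release, but the exact special tokens are an assumption here and should be taken from the tokenizer of the specific checkpoint. The helper name `infill_prompt` is hypothetical.

```python
def infill_prompt(prefix: str, suffix: str) -> str:
    """Build a prefix-suffix-middle (PSM) infilling prompt.

    The model is expected to generate the missing middle span
    after the <MID> sentinel. Sentinel strings are assumptions
    based on the Code Llama release; verify against the tokenizer.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in a function body between a
# signature and a call site that exercises it.
prompt = infill_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(2, 3))",
)
```

In practice the completed document is reassembled as prefix + generated middle + suffix, stopping generation at the model's end-of-infill token.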

Results

Task            | Dataset | Metric       | Value | Model
Code Generation | MBPP    | Accuracy (%) | 65.5  | Code Llama - Python 70B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 62.4  | Code Llama 70B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 62.2  | Code Llama - Instruct 70B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 61.2  | Unnatural Code Llama 34B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 57.0  | Code Llama - Instruct 34B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 56.2  | Code Llama - Python 34B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 55.0  | Code Llama 34B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 52.2  | GPT-3.5 Turbo
Code Generation | MBPP    | Accuracy (%) | 49.4  | Code Llama - Instruct 13B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 49.0  | Code Llama - Python 13B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 47.6  | Code Llama - Python 7B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 47.0  | Code Llama 13B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 44.4  | Code Llama - Instruct 7B (3-shot)
Code Generation | MBPP    | Accuracy (%) | 41.4  | Code Llama 7B (3-shot)
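Scores on benchmarks such as HumanEval and MBPP are commonly reported as pass@k, estimated with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): generate n samples per problem, count the c that pass the unit tests, and compute the expected probability that at least one of k drawn samples passes. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated for a problem
    c: number of those samples that pass the unit tests
    k: evaluation budget (k <= n)
    """
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    # 1 - P(all k drawn samples fail) = 1 - C(n-c, k) / C(n, k)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 samples, 1 passing -> pass@1 = 0.5
estimate = pass_at_k(2, 1, 1)
```

The per-problem estimates are then averaged over the benchmark to give the reported score.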

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning (2025-07-17)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
How Many Instructions Can LLMs Follow at Once? (2025-07-15)
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering (2025-07-15)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)