
The Falcon Series of Open Language Models

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, Daniele Mazzotta, Badreddine Noune, Baptiste Pannier, Guilherme Penedo

2023-11-28 · Multi-task Language Understanding · Sentence Completion

Abstract

We introduce the Falcon series: 7B, 40B, and 180B parameter causal decoder-only models trained on diverse, high-quality corpora predominantly assembled from web data. The largest model, Falcon-180B, has been trained on over 3.5 trillion tokens of text, the largest openly documented pretraining run. Falcon-180B significantly outperforms models such as PaLM or Chinchilla, and improves upon concurrently developed models such as LLaMA 2 or Inflection-1. It nears the performance of PaLM-2-Large at a reduced pretraining and inference cost, making it, to our knowledge, one of the three best language models in the world along with GPT-4 and PaLM-2-Large. We report detailed evaluations, as well as a deep dive into the methods and custom tooling employed to pretrain Falcon. Notably, we report on our custom distributed training codebase, which allows us to efficiently pretrain these models on up to 4,096 A100s on AWS cloud infrastructure with limited interconnect. We release a 600B-token extract of our web dataset, as well as the Falcon-7/40/180B models, under a permissive license to foster open science and accelerate the development of an open ecosystem of large language models.
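
Because the Falcon-7/40/180B checkpoints are released openly, they can be loaded with standard causal-LM tooling. Below is a minimal sketch using Hugging Face transformers, assuming the 7B checkpoint is published under the Hub id tiiuae/falcon-7b and that recent transformers and accelerate releases are installed; it is an illustration, not the authors' training or inference code.

```python
# Minimal sketch: loading an openly released Falcon checkpoint for generation.
# "tiiuae/falcon-7b" is assumed to be the Hub id of the 7B model; swap in the
# 40B/180B ids (and enough GPU memory) for the larger models.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; spreads layers across devices
)

prompt = "The Falcon series of open language models"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```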

Results

Task | Dataset | Metric | Value | Model
Transfer Learning | MMLU | Average (%) | 57 | Falcon 40B
Transfer Learning | MMLU | Average (%) | 28 | Falcon 7B (5-shot)
Multi-Task Learning | MMLU | Average (%) | 57 | Falcon 40B
Multi-Task Learning | MMLU | Average (%) | 28 | Falcon 7B (5-shot)
Sentence Completion | HellaSwag | Accuracy | 85.9 | Falcon-180B (0-shot)
Sentence Completion | HellaSwag | Accuracy | 82.7 | Falcon-40B (0-shot)
Sentence Completion | HellaSwag | Accuracy | 76.3 | Falcon-7B (0-shot)
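
The HellaSwag numbers above are 0-shot multiple-choice accuracies. A common way to score such tasks with a causal LM is to compute the log-likelihood of each candidate ending given the context and pick the highest-scoring one. The sketch below illustrates that scoring loop with transformers; the model id and the example item are illustrative assumptions, and reported results typically come from an evaluation harness with more careful tokenization and normalization.

```python
# Sketch of 0-shot multiple-choice scoring (HellaSwag-style): choose the ending
# with the highest total log-likelihood under the model. Model id and the
# example item are illustrative, not taken from the benchmark release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` after `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + continuation, return_tensors="pt").input_ids.to(model.device)
    cont_len = full_ids.shape[1] - ctx_ids.shape[1]
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position i predict token i+1, so shift by one when scoring.
    log_probs = torch.log_softmax(logits[0, -cont_len - 1:-1], dim=-1)
    cont_ids = full_ids[0, -cont_len:]
    idx = torch.arange(cont_len, device=log_probs.device)
    return log_probs[idx, cont_ids].sum().item()

context = "A man is sitting on a roof. He "
endings = [
    "is using wrap to wrap a pair of skis.",
    "is ripping level tiles off.",
    "is holding a rubik's cube.",
    "starts pulling up roofing on a roof.",
]
best = max(range(len(endings)), key=lambda i: continuation_logprob(context, endings[i]))
print("Predicted ending:", endings[best])
```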

Related Papers

Measuring Hong Kong Massive Multi-Task Language Understanding (2025-05-04)
Effectiveness of Zero-shot-CoT in Japanese Prompts (2025-03-09)
TUMLU: A Unified and Native Language Understanding Benchmark for Turkic Languages (2025-02-16)
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding (2025-01-27)
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (2025-01-22)
MMLU-CF: A Contamination-free Multi-task Language Understanding Benchmark (2024-12-19)
Llama 3 Meets MoE: Efficient Upcycling (2024-12-13)
Evaluating Gender Bias in Large Language Models (2024-11-14)