Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo Andre Niyongabo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou

Published 2021-02-02 · ACL (GEM) 2021

Tasks: Question Answering, Data-to-Text Generation, Text Generation, Abstractive Text Summarization, Cross-Lingual Abstractive Summarization, Extreme Summarization, Task-Oriented Dialogue Systems, Text Simplification

Abstract

We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Dialogue | SGD | METEOR | 0.331 | T5 |
| Dialogue | SGD | METEOR | 0.089 | BART |
| Text Generation | CommonGen | METEOR | 0.301 | BART |
| Text Generation | CommonGen | METEOR | 0.291 | T5 |
| Text Generation | Czech restaurant information | METEOR | 0.167 | TGen++ |
| Text Generation | Czech restaurant information | METEOR | 0.152 | TGen |
| Text Generation | Czech restaurant information | METEOR | 0.151 | TGen+ |
| Text Generation | DART | METEOR | 0.115 | T5 |
| Text Generation | DART | METEOR | 0.107 | BART |
| Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.394 | LSTM |
| Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.391 | TGen |
| Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.373 | BART |
| Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.369 | T5 |
| Text Generation | WebNLG ru | METEOR | 0.613 | mBART |
| Text Generation | WebNLG ru | METEOR | 0.18 | mT5 |
| Text Generation | ToTTo | METEOR | 0.363 | T5 |
| Text Generation | WebNLG en | METEOR | 0.462 | mBART |
| Text Generation | WebNLG en | METEOR | 0.287 | mT5 |
| Text Simplification | TurkCorpus | METEOR | 0.649 | T5 |
| Text Simplification | TurkCorpus | METEOR | 0.556 | BART |
| Text Simplification | ASSET | METEOR | 0.581 | T5 |
| Text Simplification | ASSET | METEOR | 0.56 | BART |
| Text Summarization | MLSUM es | METEOR | 0.21 | mBART |
| Text Summarization | MLSUM de | METEOR | 0.437 | mBART |
| Abstractive Text Summarization | MLSUM es | METEOR | 0.21 | mBART |
| Abstractive Text Summarization | MLSUM de | METEOR | 0.437 | mBART |
| Data-to-Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.394 | LSTM |
| Data-to-Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.391 | TGen |
| Data-to-Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.373 | BART |
| Data-to-Text Generation | Cleaned E2E NLG Challenge | METEOR (validation set) | 0.369 | T5 |
| Data-to-Text Generation | WebNLG ru | METEOR | 0.613 | mBART |
| Data-to-Text Generation | WebNLG ru | METEOR | 0.18 | mT5 |
| Data-to-Text Generation | ToTTo | METEOR | 0.363 | T5 |
| Data-to-Text Generation | WebNLG en | METEOR | 0.462 | mBART |
| Data-to-Text Generation | WebNLG en | METEOR | 0.287 | mT5 |
| Extreme Summarization | GEM-XSum | ROUGE-2 | 23.2 | PEGASUS |
| Extreme Summarization | XSum | METEOR | 0.216 | PEGASUS |
| Task-Oriented Dialogue Systems | SGD | METEOR | 0.331 | T5 |
| Task-Oriented Dialogue Systems | SGD | METEOR | 0.089 | BART |
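A minimal sketch of how the leaderboard above can be queried programmatically. The `RESULTS` list encodes a subset of the rows as plain tuples, and `best_models` is a hypothetical helper (not part of any GEM or Papers With Code tooling) that picks the top-scoring model per task/dataset/metric combination:

```python
# Hypothetical sketch: a few rows of the GEM results table encoded as
# (task, dataset, metric, value, model) tuples.
RESULTS = [
    ("Dialogue", "SGD", "METEOR", 0.331, "T5"),
    ("Dialogue", "SGD", "METEOR", 0.089, "BART"),
    ("Text Generation", "CommonGen", "METEOR", 0.301, "BART"),
    ("Text Generation", "CommonGen", "METEOR", 0.291, "T5"),
    ("Text Generation", "WebNLG ru", "METEOR", 0.613, "mBART"),
    ("Text Generation", "WebNLG ru", "METEOR", 0.18, "mT5"),
]

def best_models(results):
    """Return the highest-scoring (value, model) for each (task, dataset, metric) key."""
    best = {}
    for task, dataset, metric, value, model in results:
        key = (task, dataset, metric)
        if key not in best or value > best[key][0]:
            best[key] = (value, model)
    return best

leaders = best_models(RESULTS)
# e.g. leaders[("Dialogue", "SGD", "METEOR")] == (0.331, "T5")
```

Note that values on different metrics are not comparable across rows (METEOR is reported on a 0–1 scale, ROUGE-2 as a percentage), which is why the metric name is part of the key.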
