Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Hurdles to Progress in Long-form Question Answering

Kalpesh Krishna, Aurko Roy, Mohit Iyyer

2021-03-10 · NAACL 2021

Tasks: Question Answering · Text Generation · Long Form Question Answering · Open-Domain Question Answering · Open-Domain Dialog

Links: Paper · PDF · Code (official)

Abstract

The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer. While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress. To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset. While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system's generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / validation overlap, as at least 81% of ELI5 validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA. We offer suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future.
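The abstract's claim that ROUGE-L "can be easily gamed" follows from how the metric works: it is an F-measure over the longest common subsequence of candidate and reference tokens, so a long, generic answer that shares common words with the reference scores nontrivially even when it says nothing relevant. A minimal sketch of the standard ROUGE-L F1 computation (simplified: whitespace tokenization, balanced F-measure rather than the recall-weighted variant some implementations use):

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    # ROUGE-L as an F-measure over the token-level LCS.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

Because the LCS need not be contiguous, stock phrases and function words ("the", "is", "of") accumulate credit across a long answer, which is one mechanism behind the gaming the paper describes.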

Results

Task                           | Dataset    | Metric   | Value | Model
-------------------------------|------------|----------|-------|--------------------------
Question Answering             | KILT: ELI5 | F1       | 23.1  | c-REALM
Question Answering             | KILT: ELI5 | Rouge-L  | 23.4  | c-REALM
Question Answering             | KILT: ELI5 | F1       | 22.88 | arxiv.org/abs/2103.06332
Question Answering             | KILT: ELI5 | KILT-F1  | 2.34  | arxiv.org/abs/2103.06332
Question Answering             | KILT: ELI5 | KILT-RL  | 2.36  | arxiv.org/abs/2103.06332
Question Answering             | KILT: ELI5 | R-Prec   | 10.67 | arxiv.org/abs/2103.06332
Question Answering             | KILT: ELI5 | ROUGE-L  | 23.19 | arxiv.org/abs/2103.06332
Question Answering             | KILT: ELI5 | Recall@5 | 24.56 | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | F1       | 22.88 | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | KILT-F1  | 2.34  | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | KILT-RL  | 2.36  | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | R-Prec   | 10.67 | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | ROUGE-L  | 23.19 | arxiv.org/abs/2103.06332
Open-Domain Question Answering | KILT: ELI5 | Recall@5 | 24.56 | arxiv.org/abs/2103.06332
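The large gap between the plain generation scores (F1 22.88, ROUGE-L 23.19) and the KILT-prefixed scores (KILT-F1 2.34, KILT-RL 2.36) reflects how the KILT benchmark combines retrieval and generation: the downstream metric is credited only on instances where the retrieved provenance is correct (R-precision = 1), and is zero otherwise. A minimal sketch of that convention, using hypothetical per-instance fields (`r_precision`, `prediction`, `reference`) that are assumptions for illustration:

```python
def kilt_score(instances, downstream_metric):
    # KILT-style combined score: the downstream metric (e.g. F1 or
    # ROUGE-L) counts toward the average only when retrieval for that
    # instance is correct (R-precision == 1); otherwise it contributes 0.
    total = 0.0
    for inst in instances:
        if inst["r_precision"] == 1.0:
            total += downstream_metric(inst["prediction"], inst["reference"])
    return total / len(instances)
```

With a page-level R-Prec of only 10.67, most instances contribute zero, which is consistent with the KILT scores above sitting roughly an order of magnitude below the unconditioned ones.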

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)