Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Answering Open-Domain Questions of Varying Reasoning Steps from Text

Peng Qi, Haejun Lee, Oghenetegiri "TG" Sido, Christopher D. Manning

2020-10-23 · EMNLP 2021
Tasks: Question Answering · Reranking · Open-Domain Question Answering · Retrieval
Links: Paper · PDF · Code (official)

Abstract

We develop a unified system to answer directly from text open-domain questions that may require a varying number of retrieval steps. We employ a single multi-task transformer model to perform all the necessary subtasks -- retrieving supporting facts, reranking them, and predicting the answer from all retrieved documents -- in an iterative fashion. We avoid crucial assumptions of previous work that do not transfer well to real-world settings, including exploiting knowledge of the fixed number of retrieval steps required to answer each question or using structured metadata like knowledge bases or web links that have limited availability. Instead, we design a system that can answer open-domain questions on any text collection without prior knowledge of reasoning complexity. To emulate this setting, we construct a new benchmark, called BeerQA, by combining existing one- and two-step datasets with a new collection of 530 questions that require three Wikipedia pages to answer, unifying Wikipedia corpora versions in the process. We show that our model demonstrates competitive performance on both existing benchmarks and this new benchmark. We make the new benchmark available at https://beerqa.github.io/.
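The abstract describes an iterative loop in which a single model alternates between retrieving supporting facts, reranking the accumulated context, and attempting to predict an answer, stopping only when an answer is found rather than after a fixed number of hops. A minimal sketch of that control flow, with placeholder retriever/reranker/reader functions (all names and the toy corpus here are illustrative assumptions, not the authors' implementation):

```python
# Illustrative sketch (not the authors' code): the iterative
# retrieve -> rerank -> read loop the abstract describes. A step budget
# bounds the loop, but the number of steps actually used is decided at
# run time, per question. All helpers below are hypothetical stand-ins.

def retrieve(query, corpus, k=2):
    # Placeholder retriever: rank documents by term overlap with the query.
    q_tokens = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: -len(q_tokens & set(d.lower().split())))
    return scored[:k]

def rerank(question, docs):
    # Placeholder reranker; in the paper the same transformer handles
    # this subtask. Here it is a no-op.
    return docs

def predict_answer(question, context):
    # Placeholder reader: return an answer span, or None to trigger
    # another retrieval step.
    for doc in context:
        if "capital" in question.lower() and "Paris" in doc:
            return "Paris"
    return None

def answer_question(question, corpus, max_steps=3):
    """Iterate retrieval until an answer is found or the budget runs out."""
    context = []
    for _ in range(max_steps):
        context = rerank(question, context + retrieve(question, corpus))
        answer = predict_answer(question, context)
        if answer is not None:
            return answer, context
        # In the paper the model generates the next query from the
        # question plus retrieved context; this sketch reuses the question.
    return None, context

corpus = ["Paris is the capital of France.", "France is in Europe."]
answer, context = answer_question("What is the capital of France?", corpus)
```

The key contrast with fixed-hop pipelines is that `max_steps` is only an upper bound: one-step questions exit the loop immediately, while harder questions keep accumulating context.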

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Question Answering | HotpotQA | ANS-EM | 0.663 | IRRR+
Question Answering | HotpotQA | ANS-F1 | 0.791 | IRRR+
Question Answering | HotpotQA | JOINT-EM | 0.428 | IRRR+
Question Answering | HotpotQA | JOINT-F1 | 0.696 | IRRR+
Question Answering | HotpotQA | SUP-EM | 0.569 | IRRR+
Question Answering | HotpotQA | SUP-F1 | 0.832 | IRRR+
Question Answering | HotpotQA | ANS-EM | 0.657 | IRRR
Question Answering | HotpotQA | ANS-F1 | 0.782 | IRRR
Question Answering | HotpotQA | JOINT-EM | 0.421 | IRRR
Question Answering | HotpotQA | JOINT-F1 | 0.686 | IRRR
Question Answering | HotpotQA | SUP-EM | 0.559 | IRRR
Question Answering | HotpotQA | SUP-F1 | 0.821 | IRRR
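
The ANS-EM and ANS-F1 columns follow the standard HotpotQA/SQuAD answer metrics: exact match after light normalization, and token-level F1 between predicted and gold answers. A small sketch of those definitions (assuming the usual normalization of lowercasing, stripping articles and punctuation, and collapsing whitespace):

```python
# Sketch of the standard answer metrics (ANS-EM / ANS-F1), assuming the
# usual SQuAD-style definitions; not the official evaluation script.
import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop articles, strip punctuation, collapse whitespace.
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(pred, gold):
    # 1.0 iff the normalized strings are identical.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    # Token-level F1 over the normalized token multisets.
    p, g = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, `exact_match("The Eiffel Tower", "eiffel tower")` is 1.0 because normalization removes the article and case, while a partially overlapping prediction earns partial F1 credit.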

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)