Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Simple Yet Strong Pipeline for HotpotQA

Dirk Groeneveld, Tushar Khot, Mausam, Ashish Sabharwal

2020-04-14 · EMNLP 2020

Tasks: Question Answering · Multi-hop Question Answering · Named Entity Recognition (NER)

Paper · PDF

Abstract

State-of-the-art models for multi-hop question answering typically augment large-scale language models like BERT with additional, intuitively useful capabilities such as named entity recognition, graph-based reasoning, and question decomposition. However, does their strong performance on popular multi-hop datasets really justify this added design complexity? Our results suggest that the answer may be no, because even our simple pipeline based on BERT, named Quark, performs surprisingly well. Specifically, on HotpotQA, Quark outperforms these models on both question answering and support identification (and achieves performance very close to a RoBERTa model). Our pipeline has three steps: 1) use BERT to identify potentially relevant sentences independently of each other; 2) feed the set of selected sentences as context into a standard BERT span prediction model to choose an answer; and 3) use the sentence selection model, now with the chosen answer, to produce supporting sentences. The strong performance of Quark resurfaces the importance of carefully exploring simple model designs before using popular benchmarks to justify the value of complex techniques.
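The three-step pipeline from the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: the paper uses BERT models for sentence scoring and span prediction, whereas the `score_sentence` and `predict_span` functions below are simple word-overlap stand-ins, and all names, parameters, and thresholds here are illustrative assumptions.

```python
def tokenize(text):
    """Toy tokenizer: lowercase, strip basic punctuation, split on whitespace."""
    return set(text.lower().replace(".", "").replace(",", "").split())

def score_sentence(question, sentence, answer=None):
    """Toy relevance score: word overlap between the question (plus an optional
    answer) and the sentence. The paper uses a BERT classifier for this step."""
    query = tokenize(question)
    if answer:
        query |= tokenize(answer)
    return len(query & tokenize(sentence)) / max(len(query), 1)

def predict_span(question, context_sentences):
    """Toy span predictor: picks the highest-scoring sentence and returns its
    last word. The paper uses a standard BERT span-prediction model here."""
    best = max(context_sentences, key=lambda s: score_sentence(question, s))
    return best.split()[-1].rstrip(".")

def quark_pipeline(question, paragraphs, k=5, threshold=0.2):
    sentences = [s for p in paragraphs for s in p]
    # Step 1: score each sentence independently; keep the top-k as context.
    ranked = sorted(sentences, key=lambda s: score_sentence(question, s),
                    reverse=True)
    context = ranked[:k]
    # Step 2: predict an answer span from the selected context.
    answer = predict_span(question, context)
    # Step 3: re-run sentence selection, now conditioned on the chosen answer,
    # to produce the supporting sentences.
    support = [s for s in sentences
               if score_sentence(question, s, answer) >= threshold]
    return answer, support
```

For example, `quark_pipeline("What is the capital of France", [["Paris is the capital of France.", "France is in Europe."], ["Berlin is the capital of Germany."]])` runs all three steps end to end; the point is the control flow (select, answer, re-select), not the quality of the toy scorers.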

Results

| Task               | Dataset  | Metric   | Value | Model                           |
|--------------------|----------|----------|-------|---------------------------------|
| Question Answering | HotpotQA | ANS-EM   | 0.555 | Quark + SemanticRetrievalMRS IR |
| Question Answering | HotpotQA | ANS-F1   | 0.675 | Quark + SemanticRetrievalMRS IR |
| Question Answering | HotpotQA | JOINT-EM | 0.329 | Quark + SemanticRetrievalMRS IR |
| Question Answering | HotpotQA | JOINT-F1 | 0.562 | Quark + SemanticRetrievalMRS IR |
| Question Answering | HotpotQA | SUP-EM   | 0.456 | Quark + SemanticRetrievalMRS IR |
| Question Answering | HotpotQA | SUP-F1   | 0.730 | Quark + SemanticRetrievalMRS IR |

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
- Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
- Warehouse Spatial Question Answering with LLM Agent (2025-07-14)
- Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)