Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dynamic Coattention Networks For Question Answering

Caiming Xiong, Victor Zhong, Richard Socher

2016-11-05 · Question Answering
Paper · PDF · Code

Abstract

Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.
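The coattention fusion described in the abstract can be sketched with plain matrix operations. This is a minimal, hedged illustration of the idea (an affinity matrix between document and question states, attention in both directions, and a second-level fusion), not the paper's exact implementation — the real DCN adds sentinel vectors and a BiLSTM over the fused output, which are omitted here. All function and variable names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(D, Q):
    """Sketch of coattention fusion.

    D: (m, h) document hidden states; Q: (n, h) question hidden states.
    Returns an (m, 3h) document-length representation mixing both views.
    """
    L = D @ Q.T                       # (m, n) affinity between every word pair
    A_Q = softmax(L, axis=0)          # attention over document, per question word
    A_D = softmax(L, axis=1)          # attention over question, per document word
    C_Q = A_Q.T @ D                   # (n, h) question-aware document summaries
    C_D = A_D @ np.concatenate([Q, C_Q], axis=1)   # (m, 2h) second-level attention
    return np.concatenate([D, C_D], axis=1)        # (m, 3h) fused representation
```

The dynamic pointing decoder then repeatedly re-estimates the answer's start and end positions from this fused representation, which is what lets the model escape an initially wrong span.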

Results

Task                            Dataset        Metric  Value    Model
Question Answering              SQuAD1.1 dev   EM      65.4     DCN
Question Answering              SQuAD1.1 dev   F1      75.6     DCN
Question Answering              SQuAD1.1       EM      71.625   Dynamic Coattention Networks (ensemble)
Question Answering              SQuAD1.1       F1      80.383   Dynamic Coattention Networks (ensemble)
Question Answering              SQuAD1.1       EM      66.233   Dynamic Coattention Networks (single model)
Question Answering              SQuAD1.1       F1      75.896   Dynamic Coattention Networks (single model)
Question Answering              SQuAD1.1       EM      66.2     DCN
Open-Domain Question Answering  SQuAD1.1       EM      66.2     DCN
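The EM and F1 values above follow the standard SQuAD metrics: exact match after normalization, and token-level overlap F1 between the predicted and gold answer spans. A simplified sketch is below; the helper names are ours, and the official SQuAD evaluation script additionally strips punctuation and the articles "a", "an", "the" during normalization, and takes the maximum score over all gold answers.

```python
from collections import Counter

def normalize(s):
    # Simplified normalization: lowercase and split on whitespace.
    # The official script also removes punctuation and articles.
    return s.lower().split()

def exact_match(pred, gold):
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    # Harmonic mean of token-overlap precision and recall.
    p, g = normalize(pred), normalize(gold)
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "the dynamic coattention network" against gold "dynamic coattention network" scores 0 on EM but 6/7 ≈ 0.857 on F1, which is why F1 is consistently higher than EM in the table.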

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Warehouse Spatial Question Answering with LLM Agent (2025-07-14)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)