Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

Published 2018-04-23 · ICLR 2018 · Tasks: Reading Comprehension, Machine Translation, Question Answering, Translation
Paper · PDF · Code

Abstract

Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
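The abstract's core idea — an encoder built only from convolution (local interactions) and self-attention (global interactions) — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the weight names, single attention head, dimensions, and kernel size below are illustrative, and the real model uses multi-head attention, position encodings, and stacked blocks.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's feature vector (pre-norm, as in QANet-style blocks).
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def depthwise_separable_conv(x, depth_w, point_w, k):
    # x: (seq, d); depth_w: (k, d) per-channel filters; point_w: (d, d) 1x1 mixing.
    seq, d = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))           # "same" padding in time
    out = np.zeros_like(x)
    for t in range(seq):
        out[t] = (xp[t:t + k] * depth_w).sum(0)    # depthwise: local interactions
    return out @ point_w                           # pointwise: mix channels

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product attention: global interactions.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[-1])
    a = np.exp(scores - scores.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    return a @ v

def encoder_block(x, p, kernel=7):
    # Each sublayer is pre-normalized and wrapped in a residual connection.
    x = x + depthwise_separable_conv(layer_norm(x), p["dw"], p["pw"], kernel)
    x = x + self_attention(layer_norm(x), p["wq"], p["wk"], p["wv"])
    x = x + np.maximum(layer_norm(x) @ p["w1"], 0) @ p["w2"]  # feed-forward
    return x

rng = np.random.default_rng(0)
d, seq, k = 16, 10, 7
p = {name: rng.normal(size=shape) * 0.1
     for name, shape in [("dw", (k, d)), ("pw", (d, d)), ("wq", (d, d)),
                         ("wk", (d, d)), ("wv", (d, d)), ("w1", (d, d)),
                         ("w2", (d, d))]}
x = rng.normal(size=(seq, d))
y = encoder_block(x, p, kernel=k)
print(y.shape)  # → (10, 16)
```

Because nothing here is recurrent, every position can be processed in parallel across the sequence, which is the source of the training and inference speed-ups the abstract reports.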

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | SQuAD1.1 dev | EM | 75.1 | QANet (data aug ×3) |
| Question Answering | SQuAD1.1 dev | F1 | 83.8 | QANet (data aug ×3) |
| Question Answering | SQuAD1.1 dev | EM | 74.5 | QANet (data aug ×2) |
| Question Answering | SQuAD1.1 dev | F1 | 83.2 | QANet (data aug ×2) |
| Question Answering | SQuAD1.1 dev | EM | 73.6 | QANet |
| Question Answering | SQuAD1.1 dev | F1 | 82.7 | QANet |
| Question Answering | SQuAD1.1 test | EM | 76.2 | QANet + data augmentation ×3 |
| Question Answering | SQuAD1.1 test | F1 | 84.6 | QANet + data augmentation ×3 |
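The "data augmentation ×3" rows come from backtranslation: each training context is translated into a pivot language and back to produce paraphrases. A hedged sketch of the idea follows; the toy word-substitution translators stand in for real NMT models, and the keep-only-if-the-answer-survives filter is a simplification of the paper's answer-alignment step (all function names here are illustrative).

```python
def translate_en_fr(s):
    # Stand-in for an English->French NMT model (toy word table).
    table = {"the": "le", "cat": "chat", "sat": "assis", "on": "sur", "mat": "tapis"}
    return " ".join(table.get(w, w) for w in s.split())

def translate_fr_en(s):
    # Stand-in for French->English; imperfect on purpose ("tapis" -> "rug"),
    # mimicking the paraphrasing effect of a real round trip.
    table = {"le": "the", "chat": "cat", "assis": "sat", "sur": "on", "tapis": "rug"}
    return " ".join(table.get(w, w) for w in s.split())

def backtranslate(context):
    # Round-trip translation yields a paraphrase of the original context.
    return translate_fr_en(translate_en_fr(context))

def augment(example):
    """Emit a paraphrased (context, question, answer) triple, keeping it only
    if the answer string still appears verbatim in the paraphrased context
    (a simplification of the paper's answer realignment)."""
    para = backtranslate(example["context"])
    if example["answer"] in para:
        return {"context": para,
                "question": example["question"],
                "answer": example["answer"]}
    return None  # drop paraphrases that lose the answer span

ex = {"context": "the cat sat on the mat",
      "question": "who sat on the mat?",
      "answer": "cat"}
aug = augment(ex)
print(aug["context"])  # → the cat sat on the rug
```

Running the augmenter over the training set once per pivot language multiplies the effective data, which the dev-set rows above suggest is where most of the EM/F1 gain over plain QANet comes from.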

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)