Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Retrospective Reader for Machine Reading Comprehension

Zhuosheng Zhang, Junjie Yang, Hai Zhao

2020-01-27 · Reading Comprehension · Question Answering · Machine Reading Comprehension
Paper · PDF · Code · Code (official)

Abstract

Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage. MRC systems must not only answer questions when necessary but also recognize when no answer is available in the given passage and tactfully abstain from answering. When unanswerable questions are involved in the MRC task, an essential verification module, called a verifier, is required in addition to the encoder, although the latest practice in MRC modeling still benefits most from adopting well pre-trained language models as the encoder block and focusing only on the "reading". This paper is devoted to exploring better verifier design for the MRC task with unanswerable questions. Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading, which briefly investigates the overall interactions of passage and question and yields an initial judgment; 2) intensive reading, which verifies the answer and gives the final prediction. The proposed reader is evaluated on two benchmark MRC challenge datasets, SQuAD2.0 and NewsQA, achieving new state-of-the-art results. Significance tests show that our model is significantly better than the strong ELECTRA and ALBERT baselines. A series of analyses is also conducted to interpret the effectiveness of the proposed reader.
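The two-stage design in the abstract boils down to an answerability decision: the sketchy reader produces an initial no-answer score, the intensive reader produces a refined no-answer score alongside its best span, and the two are combined to decide whether to abstain. The sketch below illustrates that combination step only; the function name, score inputs, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
def rear_verification(sketchy_na_score: float,
                      intensive_na_score: float,
                      span_score: float,
                      threshold: float = 0.0) -> bool:
    """Combine the sketchy-stage and intensive-stage no-answer scores
    and abstain when their sum outweighs the best span score by more
    than `threshold`. Returns True when the model should abstain."""
    no_answer = sketchy_na_score + intensive_na_score
    return no_answer - span_score > threshold

# An unanswerable-looking example: both stages lean toward "no answer".
assert rear_verification(1.2, 0.9, 0.5) is True
# An answerable-looking example: a confident span overrides weak no-answer scores.
assert rear_verification(-0.3, 0.1, 2.0) is False
```

In practice the threshold would be tuned on a development set, trading exact-match accuracy on answerable questions against correct abstention on unanswerable ones.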

Results

Task               | Dataset  | Metric | Value  | Model
-------------------|----------|--------|--------|---------------------------------------
Question Answering | SQuAD2.0 | EM     | 90.578 | Retro-Reader (ensemble)
Question Answering | SQuAD2.0 | F1     | 92.978 | Retro-Reader (ensemble)
Question Answering | SQuAD2.0 | EM     | 90.115 | Retro-Reader on ALBERT (ensemble)
Question Answering | SQuAD2.0 | F1     | 92.58  | Retro-Reader on ALBERT (ensemble)
Question Answering | SQuAD2.0 | EM     | 89.562 | Retro-Reader on ELECTRA (single model)
Question Answering | SQuAD2.0 | F1     | 92.052 | Retro-Reader on ELECTRA (single model)
Question Answering | SQuAD2.0 | EM     | 88.107 | Retro-Reader on ALBERT (single model)
Question Answering | SQuAD2.0 | F1     | 91.419 | Retro-Reader on ALBERT (single model)

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Warehouse Spatial Question Answering with LLM Agent (2025-07-14)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)