Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Multi-style Generative Reading Comprehension

Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita

Published: 2019-01-08 · ACL 2019
Tasks: Reading Comprehension, Question Answering, Text Generation, Abstractive Text Summarization
Paper · PDF

Abstract

This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success.
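The abstract describes a decoder that generates an answer summary from the question and multiple passages rather than extracting a span. One common way to realize this kind of generative RC is a multi-source copy/generate mixture: at each decoding step, a vocabulary distribution is blended with copy distributions attending over the question and the passages. The sketch below illustrates that mixture idea only; the function and variable names are hypothetical and this is not the paper's actual implementation.

```python
import numpy as np

def mixture_distribution(p_vocab, p_copy_question, p_copy_passages, weights):
    """Blend a generator distribution with copy distributions over an
    extended vocabulary (hypothetical sketch of a multi-source
    copy/generate decoder step).

    weights = (lambda_vocab, lambda_question, lambda_passages); a real
    model would predict these per step with a softmax, so they sum to 1.
    """
    lam_v, lam_q, lam_p = weights
    return lam_v * p_vocab + lam_q * p_copy_question + lam_p * p_copy_passages

# Toy example over a 5-token extended vocabulary.
p_vocab = np.array([0.4, 0.3, 0.2, 0.1, 0.0])   # generator head
p_copy_q = np.array([0.0, 0.0, 0.5, 0.5, 0.0])  # attention over the question
p_copy_p = np.array([0.1, 0.0, 0.0, 0.0, 0.9])  # attention over the passages
weights = (0.5, 0.2, 0.3)                        # predicted mixing weights

p_final = mixture_distribution(p_vocab, p_copy_q, p_copy_p, weights)
# Because each input is a distribution and the weights sum to 1,
# p_final is itself a valid distribution over the extended vocabulary.
```

Style conditioning, the paper's second contribution, could then be layered on top by letting the target style (e.g. a concise Q&A answer vs. a well-formed NLG sentence) influence how these mixing weights are predicted.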

Results

Task               | Dataset     | Metric  | Value | Model
Question Answering | NarrativeQA | BLEU-1  | 54.11 | Masque (NarrativeQA + MS MARCO)
Question Answering | NarrativeQA | BLEU-4  | 30.43 | Masque (NarrativeQA + MS MARCO)
Question Answering | NarrativeQA | METEOR  | 26.13 | Masque (NarrativeQA + MS MARCO)
Question Answering | NarrativeQA | Rouge-L | 59.87 | Masque (NarrativeQA + MS MARCO)
Question Answering | NarrativeQA | BLEU-1  | 48.7  | Masque (NarrativeQA only)
Question Answering | NarrativeQA | BLEU-4  | 20.98 | Masque (NarrativeQA only)
Question Answering | NarrativeQA | METEOR  | 21.95 | Masque (NarrativeQA only)
Question Answering | NarrativeQA | Rouge-L | 54.74 | Masque (NarrativeQA only)
Question Answering | MS MARCO    | BLEU-1  | 43.77 | Masque Q&A Style
Question Answering | MS MARCO    | Rouge-L | 52.2  | Masque Q&A Style

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)