Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SG-Net: Syntax-Guided Machine Reading Comprehension

Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

2019-08-14 · Reading Comprehension · Question Answering · Machine Reading Comprehension · Language Modelling

Paper · PDF · Code

Abstract

For machine reading comprehension, the capacity to effectively model linguistic knowledge from detail-riddled and lengthy passages, and to filter out the noise, is essential for improving performance. Traditional attentive models attend to all words without explicit constraint, which results in inaccurate concentration on some dispensable words. In this work, we propose using syntax to guide text modeling by incorporating explicit syntactic constraints into the attention mechanism for better linguistically motivated word representations. In detail, for the self-attention network (SAN) based Transformer encoder, we introduce a syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention. The syntax-guided network (SG-Net) is then composed of this extra SDOI-SAN and the SAN from the original Transformer encoder through a dual contextual architecture, yielding better linguistically inspired representations. To verify its effectiveness, the proposed SG-Net is applied to the typical pre-trained language model BERT, which is itself built on a Transformer encoder. Extensive experiments on popular benchmarks including SQuAD 2.0 and RACE show that the proposed SG-Net design achieves substantial performance improvements over strong baselines.
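The syntax-guided self-attention described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the SDOI of each token is the token itself plus its ancestors in the dependency tree, that dependency heads come from an external parser, and it uses a single unbatched attention head for clarity.

```python
import numpy as np

def sdoi_mask(heads):
    """Build an SDOI attention mask from dependency heads.

    heads[i] is the index of token i's syntactic head, or -1 for the root.
    Under the assumption above, token i may attend only to itself and
    its ancestors in the dependency tree.
    """
    n = len(heads)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, i] = True          # each token attends to itself...
        j = heads[i]
        while j != -1:
            mask[i, j] = True      # ...and to each of its ancestors
            j = heads[j]
    return mask

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def syntax_guided_attention(Q, K, V, mask):
    """Scaled dot-product attention restricted to the SDOI mask."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)     # block non-SDOI positions
    return softmax(scores) @ V
```

In SG-Net this masked attention runs alongside the vanilla (unmasked) self-attention of the original Transformer encoder, and the two context vectors are aggregated in the dual contextual architecture; the sketch above covers only the SDOI branch.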

Results

Task               | Dataset         | Metric | Value  | Model
Question Answering | SQuAD 2.0 (dev) | EM     | 85.1   | SG-Net
Question Answering | SQuAD 2.0 (dev) | F1     | 87.9   | SG-Net
Question Answering | SQuAD 2.0       | EM     | 88.174 | XLNet + SG-Net Verifier (ensemble)
Question Answering | SQuAD 2.0       | F1     | 90.702 | XLNet + SG-Net Verifier (ensemble)
Question Answering | SQuAD 2.0       | EM     | 87.238 | XLNet + SG-Net Verifier++ (single model)
Question Answering | SQuAD 2.0       | F1     | 90.071 | XLNet + SG-Net Verifier++ (single model)
Question Answering | SQuAD 2.0       | EM     | 86.211 | SG-Net (ensemble)
Question Answering | SQuAD 2.0       | F1     | 88.848 | SG-Net (ensemble)
Question Answering | SQuAD 2.0       | EM     | 85.229 | SG-Net (single model)
Question Answering | SQuAD 2.0       | F1     | 87.926 | SG-Net (single model)

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)