Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Fact-driven Logical Reasoning for Machine Reading Comprehension

Siru Ouyang, Zhuosheng Zhang, Hai Zhao

2021-05-21 · NeurIPS 2021
Tasks: Reading Comprehension · Logical Reasoning · Machine Reading Comprehension
Links: Paper · PDF · Code (official)

Abstract

Recent years have witnessed an increasing interest in training machines with reasoning ability, which deeply relies on accurately and clearly presented clue forms. The clues are usually modeled as entity-aware knowledge in existing studies. However, those entity-aware clues are primarily focused on commonsense, making them insufficient for tasks that require knowledge of temporary facts or events, particularly in logical reasoning for reading comprehension. To address this challenge, we are motivated to cover both commonsense and temporary knowledge clues hierarchically. Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence, such as the subject-verb-object formed "facts". We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions (concepts or actions inside a fact). Experimental results on logical reasoning benchmarks and dialogue modeling datasets show that our approach improves the baselines substantially, and it is general across backbone models. Code is available at https://github.com/ozyyshr/FocalReasoner.
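The abstract's core idea, fact units as subject-verb-object backbones linked into a supergraph by shared entities, can be illustrated with a minimal sketch. This is a hypothetical toy implementation of the general concept, not the paper's actual code (see the linked FocalReasoner repository for that); the function names `extract_facts` and `build_supergraph` are invented here, and the SVO triples are assumed to come from an upstream parser that is not shown.

```python
from collections import defaultdict

def extract_facts(parsed_triples):
    """Each fact unit is a (subject, verb, object) backbone,
    assumed to be produced by an upstream dependency parser."""
    return [tuple(t) for t in parsed_triples]

def build_supergraph(facts):
    """Entity-level edges: connect two facts when they share a
    subject or object entity (the paper also uses sentence-level
    edges among fact groups, omitted in this toy sketch)."""
    adj = defaultdict(set)
    for i, (s1, _, o1) in enumerate(facts):
        for j, (s2, _, o2) in enumerate(facts):
            if i < j and {s1, o1} & {s2, o2}:
                adj[i].add(j)
                adj[j].add(i)
    return dict(adj)

facts = extract_facts([
    ("court", "upheld", "ruling"),
    ("ruling", "affects", "policy"),
])
graph = build_supergraph(facts)
# facts 0 and 1 share the entity "ruling", so they are linked
```

A graph neural network would then propagate information over these edges so that reasoning can move between facts through their shared entities.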

Results

Task                  | Dataset | Metric | Value | Model
Reading Comprehension | ReClor  | Test   | 58.9  | RoBERTa-single

Related Papers

- FEVO: Financial Knowledge Expansion and Reasoning Evolution for Large Language Models (2025-07-08)
- DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
- MiCo: Multi-image Contrast for Reinforcement Visual Reasoning (2025-06-27)
- Chaining Event Spans for Temporal Relation Grounding (2025-06-17)
- Discrete JEPA: Learning Discrete Token Representations without Reconstruction (2025-06-17)
- SoundMind: RL-Incentivized Logic Reasoning for Audio-Language Models (2025-06-15)
- CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making (2025-06-15)
- TeleMath: A Benchmark for Large Language Models in Telecom Mathematical Problem Solving (2025-06-12)