

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, Lingling Zhang

2022-05-02 · Reading Comprehension · Logical Reasoning · Machine Reading Comprehension

Paper · PDF · Code (official)

Abstract

Machine reading comprehension has attracted wide attention, since it probes a model's capacity for text understanding. To further equip machines with reasoning capability, the challenging task of logical reasoning has been proposed. Previous works on logical reasoning have introduced strategies to extract logical units from different aspects. However, modeling the long-distance dependencies among these logical units remains a challenge. It is also demanding to uncover the logical structures of the text and to fuse the discrete logic into the continuous text embedding. To tackle these issues, we propose Logiformer, an end-to-end model that utilizes a two-branch graph transformer network for logical reasoning over text. First, we introduce different extraction strategies to split the text into two sets of logical units, and construct a logical graph and a syntax graph respectively. The logical graph models causal relations for the logical branch, while the syntax graph captures co-occurrence relations for the syntax branch. Second, to model long-distance dependencies, the node sequence from each graph is fed into a fully connected graph transformer structure. The two adjacency matrices serve as attention biases for the graph transformer layers, which maps the discrete logical structures into the continuous text embedding space. Third, a dynamic gate mechanism and a question-aware self-attention module are introduced before answer prediction to update the features. The reasoning process provides interpretability by operating on logical units that are consistent with human cognition. Experimental results show the superiority of our model, which outperforms the state-of-the-art single model on two logical reasoning benchmarks.
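
The central mechanism in the abstract is worth making concrete: each graph's node sequence passes through fully connected self-attention, with the graph's adjacency matrix added to the attention logits as a bias, and a sigmoid gate fusing the two branches. Below is a minimal PyTorch sketch of that idea. It is an illustrative reconstruction from the abstract, not the authors' code: the class and variable names, the single-head design, the learnable edge_weight scalar, and the exact gating form are assumptions (the paper uses multi-head layers and its own gating), so treat this as a sketch of the technique rather than the implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiasedGraphAttention(nn.Module):
    """Self-attention over logical-unit nodes, with the graph's adjacency
    matrix added as a bias to the attention logits (hypothetical sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        # Learnable scalar mapping the discrete {0, 1} adjacency entries into
        # the continuous attention-logit space (an assumption of this sketch).
        self.edge_weight = nn.Parameter(torch.ones(1))

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, dim); adj: (num_nodes, num_nodes) with 0/1 entries
        q, k, v = self.q_proj(nodes), self.k_proj(nodes), self.v_proj(nodes)
        logits = (q @ k.transpose(-2, -1)) * self.scale
        logits = logits + self.edge_weight * adj  # adjacency as attention bias
        attn = F.softmax(logits, dim=-1)          # fully connected, no masking
        return attn @ v


# Two-branch fusion with a dynamic gate (again a simplified assumption):
num_units, dim = 6, 16
units = torch.randn(num_units, dim)                            # logical units
logic_adj = (torch.rand(num_units, num_units) > 0.5).float()   # causal edges
syntax_adj = (torch.rand(num_units, num_units) > 0.5).float()  # co-occurrence edges

logic_branch = BiasedGraphAttention(dim)
syntax_branch = BiasedGraphAttention(dim)
gate_proj = nn.Linear(2 * dim, dim)

logic_out = logic_branch(units, logic_adj)
syntax_out = syntax_branch(units, syntax_adj)
gate = torch.sigmoid(gate_proj(torch.cat([logic_out, syntax_out], dim=-1)))
fused = gate * logic_out + (1 - gate) * syntax_out  # (num_units, dim)
```

Because the bias is added before the softmax, nodes joined by an edge receive a learned boost in attention weight while all other node pairs still interact; this is how a discrete graph structure can be folded into the continuous embedding space without masking the attention.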

Results

Task                   Dataset  Metric           Value  Model
Reading Comprehension  ReClor   Test (accuracy)  63.5   RoBERTa-single

Related Papers

FEVO: Financial Knowledge Expansion and Reasoning Evolution for Large Language Models (2025-07-08)
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
MiCo: Multi-image Contrast for Reinforcement Visual Reasoning (2025-06-27)
Chaining Event Spans for Temporal Relation Grounding (2025-06-17)
Discrete JEPA: Learning Discrete Token Representations without Reconstruction (2025-06-17)
SoundMind: RL-Incentivized Logic Reasoning for Audio-Language Models (2025-06-15)
CAPO: Reinforcing Consistent Reasoning in Medical Decision-Making (2025-06-15)
TeleMath: A Benchmark for Large Language Models in Telecom Mathematical Problem Solving (2025-06-12)