Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

An MRC Framework for Semantic Role Labeling

Nan Wang, Jiwei Li, Yuxian Meng, Xiaofei Sun, Han Qiu, Ziyao Wang, Guoyin Wang, Jun He

2021-09-14 · COLING 2022
Tasks: Reading Comprehension · Semantic Role Labeling · Machine Reading Comprehension · Multiple-choice
Paper · PDF · Code (official)

Abstract

Semantic Role Labeling (SRL) aims at recognizing the predicate-argument structure of a sentence and can be decomposed into two subtasks: predicate disambiguation and argument labeling. Prior work deals with these two tasks independently, which ignores the semantic connection between the two tasks. In this paper, we propose to use the machine reading comprehension (MRC) framework to bridge this gap. We formalize predicate disambiguation as multiple-choice machine reading comprehension, where the descriptions of candidate senses of a given predicate are used as options to select the correct sense. The chosen predicate sense is then used to determine the semantic roles for that predicate, and these semantic roles are used to construct the query for another MRC model for argument labeling. In this way, we are able to leverage both the predicate semantics and the semantic role semantics for argument labeling. We also propose to select a subset of all the possible semantic roles for computational efficiency. Experiments show that the proposed framework achieves state-of-the-art or comparable results to previous work. Code is available at https://github.com/ShannonAI/MRC-SRL.
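The abstract describes a two-stage pipeline: candidate sense descriptions become options in a multiple-choice MRC instance, and the chosen sense's role descriptions become queries for a second MRC model. A minimal sketch of how those inputs could be assembled is below; the function names, prompt wording, and data shapes are hypothetical (the MRC models themselves are not shown — see the official repository for the actual implementation):

```python
# Hypothetical sketch of the two-stage input construction described in the
# abstract. Stage 1: predicate disambiguation as multiple-choice MRC.
# Stage 2: role-description queries for an extractive MRC argument labeler.

def build_sense_options(sentence, predicate, sense_descriptions):
    """Stage 1: each candidate sense description of the predicate
    becomes one option in a multiple-choice MRC instance."""
    return [
        {
            "context": sentence,
            "question": f"What is the sense of the predicate '{predicate}'?",
            "option": description,
        }
        for description in sense_descriptions
    ]

def build_argument_queries(sentence, predicate, sense, role_descriptions):
    """Stage 2: the chosen sense determines the candidate semantic roles;
    each role's description is turned into a natural-language query for
    an MRC model that extracts the argument span from the sentence."""
    return [
        {
            "context": sentence,
            "query": f"For '{predicate}' ({sense}), {role}: {description}",
        }
        for role, description in role_descriptions.items()
    ]

# Toy usage with made-up frameset data:
sentence = "The chef cooked dinner."
options = build_sense_options(
    sentence, "cooked",
    ["cook.01: prepare food by heating", "cook.02: falsify, tamper with"],
)
queries = build_argument_queries(
    sentence, "cooked", "cook.01",
    {"ARG0": "the cook", "ARG1": "the food cooked"},
)
```

In this framing, the efficiency trick mentioned in the abstract would correspond to pruning `role_descriptions` to a likely subset before building queries, so the argument labeler is not run once per possible role.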

Results

Task | Dataset | Metric | Value | Model
Semantic Role Labeling | OntoNotes | F1 | 88.3 | MRC-SRL
Semantic Role Labeling | CoNLL 2005 | F1 | 90 | MRC-SRL

Related Papers

The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
HATS: Hindi Analogy Test Set for Evaluating Reasoning in Large Language Models (2025-07-17)
MateInfoUB: A Real-World Benchmark for Testing LLMs in Competitive, Multilingual, and Multimodal Educational Tasks (2025-07-03)
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
Advanced Financial Reasoning at Scale: A Comprehensive Evaluation of Large Language Models on CFA Level III (2025-06-29)
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs (2025-06-26)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
PhysUniBench: An Undergraduate-Level Physics Reasoning Benchmark for Multimodal Models (2025-06-21)