Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Unified MRC Framework for Named Entity Recognition

Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, Jiwei Li

2019-10-25 · ACL 2020

Tasks: Nested Named Entity Recognition, Reading Comprehension, Named Entity Recognition, Entity Extraction using GAN, Chinese Named Entity Recognition, Named Entity Recognition (NER), Machine Reading Comprehension, Nested Mention Recognition

Abstract

The task of named entity recognition (NER) is normally divided into nested NER and flat NER, depending on whether named entities are nested or not. Models are usually developed separately for the two tasks, since sequence labeling models, the most widely used backbone for flat NER, can only assign a single label to a particular token, which is unsuitable for nested NER, where a token may be assigned several labels. In this paper, we propose a unified framework capable of handling both flat and nested NER tasks. Instead of treating NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the PER label is formalized as extracting answer spans to the question "which person is mentioned in the text?". This formulation naturally tackles the entity overlapping issue in nested NER: extracting two overlapping entities of different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates entity extraction, leading to better performance not only for nested NER but also for flat NER. We conduct experiments on both nested and flat NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We achieve substantial performance gains over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, and +6.37 on ACE04, ACE05, GENIA, and KBP17 respectively, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, and +1.49 on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, and Chinese OntoNotes 4.0 respectively.
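The core idea above — one independent question per entity type, so overlapping entities of different categories stop being a labeling conflict — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the query wording mirrors the example in the abstract, while `answer_spans_fn` and the gazetteer-based `toy_model` are hypothetical stand-ins for a trained MRC span-prediction model such as BERT-MRC.

```python
# Sketch of the MRC formulation for NER: each entity type becomes a
# natural-language query, and extraction reduces to answering that
# query over the passage. Queries are illustrative assumptions.
QUERIES = {
    "PER": "which person is mentioned in the text?",
    "ORG": "which organization is mentioned in the text?",
    "LOC": "which location is mentioned in the text?",
}

def extract_entities(context, answer_spans_fn):
    """Run one independent MRC query per entity type.

    answer_spans_fn(context, query) stands in for a trained MRC model
    (e.g. a BERT span-prediction head) and returns (start, end)
    character spans of the answers found in the context.
    """
    results = {}
    for label, query in QUERIES.items():
        spans = answer_spans_fn(context, query)
        results[label] = [context[s:e] for s, e in spans]
    return results

# Toy stand-in for a trained model: a simple gazetteer lookup,
# purely to make the sketch runnable.
GAZETTEER = {"PER": ["Ada Lovelace"], "ORG": ["Bank of England"], "LOC": ["England"]}

def toy_model(context, query):
    label = next(l for l, q in QUERIES.items() if q == query)
    spans = []
    for name in GAZETTEER[label]:
        i = context.find(name)
        if i != -1:
            spans.append((i, i + len(name)))
    return spans

text = "Ada Lovelace visited the Bank of England."
entities = extract_entities(text, toy_model)
```

Note how "Bank of England" (ORG) and "England" (LOC) overlap in the passage: a sequence labeler would have to assign two labels to the same tokens, while here the overlap is resolved by answering two independent questions.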

Results

Task | Dataset | Metric | Value | Model
Named Entity Recognition (NER) | OntoNotes v5 (English) | F1 | 91.11 | BERT-MRC
Named Entity Recognition (NER) | ACE 2005 | F1 | 86.88 | BERT-MRC
Named Entity Recognition (NER) | CoNLL 2003 (English) | F1 | 93.04 | BERT-MRC
Named Entity Recognition (NER) | MSRA | F1 | 95.75 | BERT-MRC
Named Entity Recognition (NER) | OntoNotes 4 | F1 | 82.11 | BERT-MRC
Nested Mention Recognition | ACE 2004 | F1 | 85.98 | BERT-MRC

Related Papers

Flippi: End To End GenAI Assistant for E-Commerce (2025-07-08)
DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models (2025-06-28)
Chaining Event Spans for Temporal Relation Grounding (2025-06-17)
S2ST-Omni: An Efficient and Scalable Multilingual Speech-to-Speech Translation Framework via Seamless Speech-Text Alignment and Streaming Speech Generation (2025-06-11)
CoMuMDR: Code-mixed Multi-modal Multi-domain corpus for Discourse paRsing in conversations (2025-06-10)
Automatic Generation of Inference Making Questions for Reading Comprehension Assessments (2025-06-09)
SCOP: Evaluating the Comprehension Process of Large Language Models from a Cognitive View (2025-06-05)