Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unified Named Entity Recognition as Word-Word Relation Classification

Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, Fei Li

Published: 2021-12-19
Tasks: Nested Named Entity Recognition, Named Entity Recognition (NER), Chinese Named Entity Recognition, Classification, Relation Classification

Paper · PDF · Code (official)

Abstract

So far, named entity recognition (NER) has involved three major settings: flat, overlapped (aka nested), and discontinuous NER, which have mostly been studied individually. Recently, growing interest has emerged in unified NER, tackling all three tasks concurrently with a single model. The current best-performing methods mainly include span-based and sequence-to-sequence models; unfortunately, the former focus merely on boundary identification, while the latter may suffer from exposure bias. In this work, we present a novel alternative by modeling unified NER as word-word relation classification, namely W^2NER. The architecture resolves the core bottleneck of unified NER by effectively modeling the neighboring relations between entity words with Next-Neighboring-Word (NNW) and Tail-Head-Word-* (THW-*) relations. Based on the W^2NER scheme, we develop a neural framework in which unified NER is modeled as a 2D grid of word pairs. We then propose multi-granularity 2D convolutions for better refining the grid representations. Finally, a co-predictor is used to jointly reason over the word-word relations. We perform extensive experiments on 14 widely used benchmark datasets for flat, overlapped, and discontinuous NER (8 English and 6 Chinese datasets), where our model beats all current top-performing baselines, pushing the state of the art for unified NER.
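The NNW/THW-* scheme described above can be illustrated with a minimal decoding sketch. This is an assumption-laden toy (the grid encoding, label strings, and function name are illustrative, not the authors' exact implementation): each cell of a word-pair grid holds a relation label, `"NNW"` at (i, j) meaning word j follows word i inside some entity, and `"THW-TYPE"` at (tail, head) marking the tail and head words of an entity of type TYPE. Decoding then walks NNW chains from each head to its tail, which naturally covers flat, nested, and discontinuous entities:

```python
# Toy W^2NER-style decoder (illustrative only; label names, grid layout,
# and conventions are assumptions, not the paper's exact code).
# grid[i][j] == "NNW"       : word j is the next word of word i in an entity
# grid[t][h] == "THW-TYPE"  : word t is the tail, word h the head, of a
#                             TYPE entity (stored in the lower triangle)

def decode_entities(grid):
    n = len(grid)
    # Successor lists induced by NNW relations (upper triangle).
    nexts = {i: [j for j in range(i + 1, n) if grid[i][j] == "NNW"]
             for i in range(n)}
    entities = []

    def walk(path, tail, etype):
        # Extend the word chain until it reaches the tail word.
        cur = path[-1]
        if cur == tail:
            entities.append((tuple(path), etype))
            return
        for j in nexts[cur]:
            if j <= tail:
                walk(path + [j], tail, etype)

    for tail in range(n):
        for head in range(tail + 1):  # head never comes after tail
            label = grid[tail][head]
            if label.startswith("THW-"):
                etype = label[len("THW-"):]
                if head == tail:
                    entities.append(((head,), etype))  # single-word entity
                else:
                    walk([head], tail, etype)
    return entities

# Discontinuous example: "aching in legs and shoulders" yields the two
# entities "aching in legs" and "aching in shoulders".
grid = [["" for _ in range(5)] for _ in range(5)]
grid[0][1] = "NNW"          # aching -> in
grid[1][2] = "NNW"          # in -> legs
grid[1][4] = "NNW"          # in -> shoulders
grid[2][0] = "THW-Sym"      # tail=legs, head=aching
grid[4][0] = "THW-Sym"      # tail=shoulders, head=aching
print(decode_entities(grid))
```

Note how the two entities share the words "aching in" yet are recovered separately, which is exactly the case span-based methods struggle to represent with contiguous spans.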

Results

| Task                           | Dataset                | Metric | Value | Model |
|--------------------------------|------------------------|--------|-------|-------|
| Named Entity Recognition (NER) | Ontonotes v5 (English) | F1     | 90.5  | W2NER |
| Named Entity Recognition (NER) | CoNLL 2003 (English)   | F1     | 93.07 | W2NER |
| Named Entity Recognition (NER) | ACE 2005               | F1     | 86.79 | W2NER |
| Named Entity Recognition (NER) | ACE 2004               | F1     | 87.52 | W2NER |
| Named Entity Recognition (NER) | GENIA                  | F1     | 81.39 | W2NER |
| Named Entity Recognition (NER) | MSRA                   | F1     | 96.1  | W2NER |
| Named Entity Recognition (NER) | OntoNotes 4            | F1     | 83.08 | W2NER |

Related Papers

Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs) (2025-07-13)
Flippi: End To End GenAI Assistant for E-Commerce (2025-07-08)
Fuzzy Classification Aggregation for a Continuum of Agents (2025-07-06)
Hybrid-View Attention for csPCa Classification in TRUS (2025-07-04)
Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models (2025-06-28)