
UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition

Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, Hoifung Poon

2023-08-07 | Tasks: Named Entity Recognition (NER), Open Information Extraction

Abstract

Large language models (LLMs) have demonstrated remarkable generalizability, such as understanding arbitrary entities and relations. Instruction tuning has proven effective for distilling LLMs into more cost-efficient models such as Alpaca and Vicuna. Yet such student models still trail the original LLMs by large margins in downstream applications. In this paper, we explore targeted distillation with mission-focused instruction tuning to train student models that can excel in a broad application class such as open information extraction. Using named entity recognition (NER) as a case study, we show how ChatGPT can be distilled into much smaller UniversalNER models for open NER. For evaluation, we assemble the largest NER benchmark to date, comprising 43 datasets across 9 diverse domains such as biomedicine, programming, social media, law, and finance. Without using any direct supervision, UniversalNER attains remarkable NER accuracy across tens of thousands of entity types, outperforming general instruction-tuned models such as Alpaca and Vicuna by over 30 absolute F1 points on average. With a tiny fraction of the parameters, UniversalNER not only acquires ChatGPT's ability to recognize arbitrary entity types but also surpasses ChatGPT's NER accuracy by 7-9 absolute F1 points on average. Remarkably, UniversalNER even outperforms state-of-the-art multi-task instruction-tuned systems such as InstructUIE, which uses supervised NER examples, by a large margin. We also conduct thorough ablation studies to assess the impact of various components of our distillation approach. We release the distillation recipe, data, and UniversalNER models to facilitate future research on targeted distillation.
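
As a rough illustration of how a distilled open-NER model of this kind might be queried, the sketch below loads a released UniversalNER checkpoint with Hugging Face Transformers and asks for one entity type at a time. The model id "Universal-NER/UniNER-7B-type" and the conversation-style prompt template are assumptions based on the released recipe, not details stated on this page; consult the authors' repository for the exact format.

```python
# A minimal sketch of querying a UniversalNER-style model for open NER.
# ASSUMPTIONS: the model id "Universal-NER/UniNER-7B-type" and the
# conversation-style prompt template below are taken to match the
# released distillation recipe; verify both against the official repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Universal-NER/UniNER-7B-type"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def extract_entities(text: str, entity_type: str) -> str:
    # One entity type per query: the model is asked which spans of the
    # given type occur in the text and generates its answer as a string.
    prompt = (
        "A virtual assistant answers questions from a user based on the "
        "provided text.\n"
        f"USER: Text: {text}\n"
        "ASSISTANT: I've read this text.\n"
        f"USER: What describes {entity_type} in the text?\n"
        "ASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

print(extract_entities("Aspirin reduces fever.", "drug"))
```

Because the entity type is free text in the prompt, the same model can be asked about arbitrary types (e.g. "drug", "vehicle model", "legal statute") without retraining, which is what the abstract means by open NER.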

Results

Task                           | Dataset      | Metric | Value | Model
Named Entity Recognition (NER) | ACE 2005     | F1     | 86.69 | UniNER-7B
Named Entity Recognition (NER) | CoNLL03      | F1     | 93.3  | UniNER-7B
Named Entity Recognition (NER) | FindVehicle  | F1     | 98.3  | UniNER-7B
Named Entity Recognition (NER) | AnatEM       | F1     | 88.65 | UniNER-7B
Named Entity Recognition (NER) | BC4CHEMD     | F1     | 89.21 | UniNER-7B
Named Entity Recognition (NER) | BC2GM        | F1     | 82.42 | UniNER-7B
Named Entity Recognition (NER) | BC5CDR       | F1     | 89.34 | UniNER-7B
Named Entity Recognition (NER) | GENIA        | F1     | 77.54 | UniNER-7B
Named Entity Recognition (NER) | NCBI Disease | F1     | 86.96 | UniNER-7B
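
The F1 values above are the standard entity-level micro-F1 for NER: a prediction counts as correct only if both the span and the type exactly match a gold annotation. A minimal sketch of that metric, assuming predictions and gold labels are represented as sets of (start, end, type) tuples (a simplification; benchmark-specific scorers may differ in tokenization and matching details):

```python
# A minimal sketch of entity-level micro-F1, assuming gold and predicted
# entities are sets of (start, end, type) span tuples.
def micro_f1(gold: set, pred: set) -> float:
    tp = len(gold & pred)  # exact span-and-type matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative data: one correct entity, one spurious, one missed.
gold = {(0, 7, "drug"), (16, 21, "symptom")}
pred = {(0, 7, "drug"), (10, 14, "symptom")}
print(f"{micro_f1(gold, pred):.2f}")  # 0.50
```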

Related Papers

Flippi: End To End GenAI Assistant for E-Commerce (2025-07-08)
Selecting and Merging: Towards Adaptable and Scalable Named Entity Recognition with Large Language Models (2025-06-28)
Improving Named Entity Transcription with Contextual LLM-based Revision (2025-06-12)
Better Semi-supervised Learning for Multi-domain ASR Through Incremental Retraining and Data Filtering (2025-06-05)
Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective (2025-06-05)
Efficient Data Selection for Domain Adaptation of ASR Using Pseudo-Labels and Multi-Stage Filtering (2025-06-04)
EL4NER: Ensemble Learning for Named Entity Recognition via Multiple Small-Parameter Large Language Models (2025-05-29)
Label-Guided In-Context Learning for Named Entity Recognition (2025-05-29)