Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto

Published: 2020-10-02 · EMNLP 2020

Tasks: Question Answering · Extractive Question-Answering · Relation Extraction · Relation Classification · Common Sense Reasoning · Named Entity Recognition (NER) · Entity Typing · Language Modelling

Links: Paper · PDF · Code (official and community implementations)

Abstract

Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
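The entity-aware self-attention mechanism described above selects a different query projection depending on whether the attending token and the attended-to token are words or entities, while keys and values are shared. A minimal single-head NumPy sketch of this idea (shapes, names, and the toy inputs are illustrative assumptions, not LUKE's actual implementation):

```python
import numpy as np

def entity_aware_attention(x, token_types, Wq, Wk, Wv):
    """Single-head attention where the query projection depends on the
    (query-token type, key-token type) pair, in the spirit of LUKE's
    entity-aware self-attention. All names/shapes here are illustrative."""
    n, _ = x.shape
    d_head = Wk.shape[1]
    K = x @ Wk  # key projection, shared across token types
    V = x @ Wv  # value projection, shared across token types
    scores = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # pick the query matrix for this (word/entity, word/entity) pair
            q = x[i] @ Wq[(token_types[i], token_types[j])]
            scores[i, j] = q @ K[j] / np.sqrt(d_head)
    # row-wise softmax over key positions
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    return attn @ V

# toy sequence: three word tokens and two entity tokens
rng = np.random.default_rng(0)
d_model, d_head = 8, 4
token_types = ["word", "word", "entity", "word", "entity"]
x = rng.standard_normal((len(token_types), d_model))
pairs = [("word", "word"), ("word", "entity"),
         ("entity", "word"), ("entity", "entity")]
Wq = {p: rng.standard_normal((d_model, d_head)) for p in pairs}
Wk = rng.standard_normal((d_model, d_head))
Wv = rng.standard_normal((d_model, d_head))
out = entity_aware_attention(x, token_types, Wq, Wk, Wv)  # shape (5, 4)
```

The design point the paper makes is that a standard transformer would use one query matrix for all four pairings; splitting it lets the model treat word-to-entity attention differently from word-to-word attention.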

Results

| Task                           | Dataset               | Metric            | Value  | Model               |
|--------------------------------|-----------------------|-------------------|--------|---------------------|
| Relation Extraction            | TACRED                | F1 (1% Few-Shot)  | 17     | LUKE                |
| Relation Extraction            | TACRED                | F1 (5% Few-Shot)  | 51.6   | LUKE                |
| Relation Extraction            | TACRED                | F1                | 72.7   | LUKE 483M           |
| Relation Classification        | TACRED                | F1                | 72.7   | LUKE 483M           |
| Question Answering             | SQuAD 1.1 dev         | EM                | 89.8   | LUKE                |
| Question Answering             | SQuAD 1.1 dev         | F1                | 95     | LUKE 483M           |
| Question Answering             | SQuAD 1.1             | EM                | 90.202 | LUKE (single model) |
| Question Answering             | SQuAD 1.1             | F1                | 95.379 | LUKE (single model) |
| Question Answering             | SQuAD 1.1             | EM                | 90.2   | LUKE                |
| Question Answering             | SQuAD 1.1             | F1                | 95.4   | LUKE 483M           |
| Question Answering             | SQuAD 2.0             | EM                | 87.429 | LUKE (single model) |
| Question Answering             | SQuAD 2.0             | F1                | 90.163 | LUKE (single model) |
| Question Answering             | SQuAD 2.0             | F1                | 90.2   | LUKE 483M           |
| Common Sense Reasoning         | ReCoRD                | EM                | 90.6   | LUKE 483M           |
| Common Sense Reasoning         | ReCoRD                | F1                | 91.2   | LUKE 483M           |
| Named Entity Recognition (NER) | CoNLL 2003 (English)  | F1                | 94.3   | LUKE 483M           |
| Named Entity Recognition (NER) | CoNLL++               | F1                | 95.89  | LUKE (Large)        |
| Entity Typing                  | Open Entity           | F1                | 78.2   | MLMET               |
