Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs

Yinquan Lu, Haonan Lu, Guirong Fu, Qun Liu

2021-09-09 · Reading Comprehension · Question Answering · Common Sense Reasoning · World Knowledge · Machine Reading Comprehension · Language Modelling

Paper · PDF · Code (official)

Abstract

Incorporating factual knowledge into pre-trained language models (PLMs) such as BERT is an emerging trend in recent NLP studies. However, most existing methods couple an external knowledge-integration module with a modified pre-training loss and re-run the pre-training process on a large-scale corpus. Re-pretraining these models is usually resource-consuming and hard to adapt to another domain with a different knowledge graph (KG). Moreover, those works either cannot embed knowledge context dynamically according to textual context, or struggle with the knowledge-ambiguity issue. In this paper, we propose a novel knowledge-aware language model framework based on the fine-tuning process, which equips the PLM with a unified knowledge-enhanced text graph containing both the text and multi-relational sub-graphs extracted from the KG. We design a hierarchical relational-graph-based message-passing mechanism, which allows the representations of the injected KG and the text to update each other mutually, and which can dynamically select among ambiguous entities whose mentions share the same surface text. Our empirical results show that our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT, and achieves significant improvement on the machine reading comprehension (MRC) task compared with other knowledge-enhanced models.
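To make the disambiguation idea concrete, the sketch below shows one simplified step in the spirit of the abstract: a textual mention attends over its candidate KG entity embeddings (soft selection among ambiguous entities sharing the same surface text), and the attended knowledge is fused back into the mention representation. This is a minimal illustration, not the paper's actual architecture; all function names and the toy vectors are hypothetical, and real KELM operates on a full hierarchical relational graph with learned parameters.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def disambiguate_and_update(mention_vec, candidate_entity_vecs):
    """Soft-select among ambiguous candidate KG entities for one mention,
    then fuse the attended entity knowledge back into the mention vector.
    (Illustrative only -- real models use learned attention parameters.)
    """
    # Attention: score each candidate entity against the textual mention.
    weights = softmax([dot(mention_vec, e) for e in candidate_entity_vecs])
    # Aggregate candidate embeddings by their attention weights.
    knowledge = [sum(w * e[i] for w, e in zip(weights, candidate_entity_vecs))
                 for i in range(len(mention_vec))]
    # Residual fusion: mention representation updated with KG knowledge.
    updated = [t + k for t, k in zip(mention_vec, knowledge)]
    return updated, weights

# Toy example: a "bank" mention whose context vector resembles the
# financial-institution entity more than the river-bank entity.
mention = [1.0, 0.0]
candidates = [[0.9, 0.1],   # bank (financial institution)
              [0.0, 1.0]]   # bank (river)
updated, weights = disambiguate_and_update(mention, candidates)
print(weights)  # the finance entity receives the larger weight
```

Because both directions of the paper's message passing (text-to-KG and KG-to-text) follow this same attend-then-fuse pattern, the update can run at fine-tuning time without re-pretraining the underlying PLM.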

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | COPA | Accuracy | 78 | KELM (finetuning BERT-large based single model) |
| Question Answering | MultiRC | EM | 27.2 | KELM (finetuning BERT-large based single model) |
| Question Answering | MultiRC | F1 | 70.8 | KELM (finetuning BERT-large based single model) |
| Common Sense Reasoning | ReCoRD | EM | 89.1 | KELM (finetuning RoBERTa-large based single model) |
| Common Sense Reasoning | ReCoRD | F1 | 89.6 | KELM (finetuning RoBERTa-large based single model) |
| Common Sense Reasoning | ReCoRD | EM | 76.2 | KELM (finetuning BERT-large based single model) |
| Common Sense Reasoning | ReCoRD | F1 | 76.7 | KELM (finetuning BERT-large based single model) |

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
- HRSeg: High-Resolution Visual Perception and Enhancement for Reasoning Segmentation (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)