Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Biomedical Pretrained Language Models with Knowledge

Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang

2021-04-21 · NAACL (BioNLP) 2021

Tasks: Relation Extraction, Named Entity Recognition (NER), Entity Linking, Language Modelling

Abstract

Pretrained language models have shown success in many natural language processing tasks, and many works explore incorporating knowledge into language models. In the biomedical domain, experts have spent decades building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that first applies a text-only encoding layer to learn entity representations and then applies a text-entity fusion encoding layer to aggregate them. In addition, we add two training objectives: entity detection and entity linking. Experiments on the named entity recognition and relation extraction tasks from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model better captures medical knowledge.
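The abstract's pipeline (text-only encoding, then text-entity fusion, plus entity-detection and entity-linking objectives) can be illustrated with a toy sketch. Everything below is a hypothetical simplification for intuition only: the dimensions, the identity "encoder", and the additive fusion and dot-product scoring stand in for the paper's actual Transformer layers and losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, not the paper's settings.
D = 8            # shared hidden size for tokens and entities
N_ENTITIES = 5   # toy stand-in for the UMLS entity vocabulary

# Learned entity embedding table (random here for illustration).
entity_table = rng.normal(size=(N_ENTITIES, D))

def text_only_encode(token_embs):
    """Stand-in for the text-only encoding layers (identity here)."""
    return token_embs

def text_entity_fusion(token_reprs, entity_ids):
    """Aggregate each token's linked entity embedding into its
    representation; entity_id -1 means the token is not in a mention."""
    fused = token_reprs.copy()
    for i, eid in enumerate(entity_ids):
        if eid >= 0:
            fused[i] += entity_table[eid]
    return fused

def entity_linking_scores(mention_repr):
    """Linking objective: score a mention against every entity,
    e.g. for a softmax cross-entropy over the entity vocabulary."""
    return mention_repr @ entity_table.T

tokens = rng.normal(size=(4, D))   # a 4-token "sentence"
entity_ids = [-1, 2, 2, -1]        # tokens 1-2 linked to toy entity 2
                                   # (the detection objective predicts
                                   # this mention/non-mention labeling)

h = text_entity_fusion(text_only_encode(tokens), entity_ids)
scores = entity_linking_scores(h[1])   # one mention token's linking scores
```

Tokens outside any mention pass through unchanged, while mention tokens are shifted toward their linked entity's embedding; the linking scores over the entity table are what a cross-entropy objective would supervise.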

Results

Task | Dataset | Metric | Value | Model
Relation Extraction | GAD | F1 | 84.3 | KeBioLM
Relation Extraction | DDI | F1 | 81.9 | KeBioLM
Relation Extraction | ChemProt | F1 | 77.5 | KeBioLM
Named Entity Recognition (NER) | NCBI-disease | F1 | 89.1 | KeBioLM
Named Entity Recognition (NER) | BC5CDR-chemical | F1 | 93.3 | KeBioLM
Named Entity Recognition (NER) | BC5CDR-disease | F1 | 86.1 | KeBioLM
Named Entity Recognition (NER) | BC2GM | F1 | 85.1 | KeBioLM
Named Entity Recognition (NER) | JNLPBA | F1 | 82 | KeBioLM
