Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-task Pre-training Language Model for Semantic Network Completion

Da Li, Sen Yang, Kele Xu, Ming Yi, Yukai He, Huaimin Wang

2022-01-13 · Knowledge Graphs · Knowledge Graph Embedding · Data Augmentation · Knowledge Graph Completion · Contrastive Learning · Language Modelling · Link Prediction

Paper · PDF · Code (official)

Abstract

Semantic networks, such as knowledge graphs, can represent knowledge by leveraging graph structure. Although knowledge graphs show promising value in natural language processing, they suffer from incompleteness. This paper focuses on knowledge graph completion by predicting links between entities, a fundamental yet critical task. Semantic matching is a potential solution because it can handle unseen entities, which translational-distance-based methods struggle with. However, to match the performance of translational-distance-based methods, semantic-matching-based methods require large-scale training datasets, which are typically unavailable in practical settings. We therefore employ a language model and introduce a novel knowledge graph architecture named LP-BERT, which comprises two main stages: multi-task pre-training and knowledge graph fine-tuning. In the pre-training phase, three tasks drive the model to learn relationships from triples by predicting either entities or relations. In the fine-tuning phase, inspired by contrastive learning, we design triple-style in-batch negative sampling, which greatly increases the proportion of negative samples while keeping the training time almost unchanged. Furthermore, we propose a new data augmentation method that exploits the inverse relationship of triples to improve the performance and robustness of the model. To demonstrate the effectiveness of our method, we conduct extensive experiments on three widely used datasets: WN18RR, FB15k-237, and UMLS. The results demonstrate the superiority of our method, which achieves state-of-the-art performance on the WN18RR and FB15k-237 datasets. Notably, Hits@10 improves by 5% over the previous state-of-the-art result on WN18RR and reaches 100% on UMLS.
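The two fine-tuning ideas in the abstract (inverse-relation data augmentation and triple-style in-batch negative sampling) can be sketched as plain Python. This is an illustrative sketch, not the paper's implementation; the function names and the `"_inverse"` relation suffix are assumptions.

```python
def augment_with_inverses(triples):
    """Inverse-relation data augmentation: for each positive triple
    (h, r, t), also train on the reversed triple (t, r^-1, h).
    The "_inverse" suffix is a hypothetical naming convention."""
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + "_inverse", h))
    return augmented

def in_batch_negatives(batch):
    """Triple-style in-batch negative sampling: for each positive
    (h, r, t), every distinct tail t' from the other triples in the
    same batch yields a negative (h, r, t'). This multiplies the
    number of negatives per step without extra sampling cost."""
    negatives = []
    for i, (h, r, t) in enumerate(batch):
        for j, (_, _, t2) in enumerate(batch):
            if i != j and t2 != t:
                negatives.append((h, r, t2))
    return negatives
```

With a batch of B triples, in-batch sampling yields up to B·(B−1) negatives per step, which is the "greatly increased proportion of negative sampling" the abstract refers to.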

Results

Task             Dataset     Metric   Value  Model
Link Prediction  UMLS        Hits@10  1.00   LP-BERT
Link Prediction  UMLS        MR       1.18   LP-BERT
Link Prediction  WN18RR      Hits@1   0.343  LP-BERT
Link Prediction  WN18RR      Hits@10  0.752  LP-BERT
Link Prediction  WN18RR      Hits@3   0.563  LP-BERT
Link Prediction  WN18RR      MR       92     LP-BERT
Link Prediction  WN18RR      MRR      0.482  LP-BERT
Link Prediction  FB15k-237   Hits@1   0.223  LP-BERT
Link Prediction  FB15k-237   Hits@10  0.49   LP-BERT
Link Prediction  FB15k-237   Hits@3   0.336  LP-BERT
Link Prediction  FB15k-237   MR       154    LP-BERT
Link Prediction  FB15k-237   MRR      0.31   LP-BERT
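The metrics in the table above are standard link-prediction statistics computed from the rank of the true entity among all candidates. A minimal sketch of how MR, MRR, and Hits@k are derived from a list of (filtered) ranks, assuming rank 1 means the correct entity was scored highest:

```python
def ranking_metrics(ranks, ks=(1, 3, 10)):
    """Compute MR (mean rank), MRR (mean reciprocal rank), and Hits@k
    from a list of integer ranks of the correct entity per test triple."""
    n = len(ranks)
    metrics = {
        "MR": sum(ranks) / n,                      # lower is better
        "MRR": sum(1.0 / r for r in ranks) / n,    # higher is better
    }
    for k in ks:
        # fraction of test triples whose correct entity ranks in the top k
        metrics[f"Hits@{k}"] = sum(1 for r in ranks if r <= k) / n
    return metrics
```

For example, ranks `[1, 2, 5, 20]` give MR = 7.0, MRR = 0.4375, and Hits@10 = 0.75; a Hits@10 of 1.00, as reported on UMLS, means every test triple's correct entity ranked in the top 10.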

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
- SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)