
Self-Distillation with Meta Learning for Knowledge Graph Completion

Yunshui Li, Junhao Liu, Chengming Li, Min Yang

2023-05-20 · Findings of the Association for Computational Linguistics: EMNLP 2022 · Meta-Learning · Knowledge Graph Completion · Transfer Learning

Paper · PDF · Code (official)

Abstract

In this paper, we propose a self-distillation framework with meta learning (MetaSD) for knowledge graph completion with dynamic pruning, which aims to learn compressed graph embeddings and tackle long-tail samples. Specifically, we first propose a dynamic pruning technique to obtain a small pruned model from a large source model, where the pruning mask of the pruned model can be updated adaptively per epoch after the model weights are updated. The pruned model is expected to be more sensitive to difficult-to-memorize samples (e.g., long-tail samples) than the source model. Then, we propose a one-step meta self-distillation method for distilling comprehensive knowledge from the source model to the pruned model, where the two models co-evolve in a dynamic manner during training. In particular, we exploit the performance of the pruned model, which is trained alongside the source model in one iteration, to improve the source model's knowledge transfer ability for the next iteration via meta learning. Extensive experiments show that MetaSD achieves competitive performance compared to strong baselines, while being 10x smaller than the baselines.
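The abstract combines two mechanisms: a pruning mask that is recomputed every epoch, and a teacher-student loop in which both models update within one iteration. Below is a minimal PyTorch sketch of how those two ideas can fit together. The toy model, random data, loss weighting, and the first-order "feedback" update standing in for the paper's meta-gradient are all illustrative assumptions, not the authors' implementation; for that, see the official code linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude weights; the rest are pruned."""
    n_keep = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - n_keep + 1).values
    return (weight.abs() >= threshold).float()

class TinyScorer(nn.Module):
    """Toy stand-in for a KG embedding model scoring candidate entities."""
    def __init__(self, dim_in: int = 32, n_entities: int = 10):
        super().__init__()
        self.fc = nn.Linear(dim_in, n_entities)
    def forward(self, x, mask=None):
        weight = self.fc.weight * mask if mask is not None else self.fc.weight
        return F.linear(x, weight, self.fc.bias)

source, pruned = TinyScorer(), TinyScorer()
opt_source = torch.optim.Adam(source.parameters(), lr=1e-3)
opt_pruned = torch.optim.Adam(pruned.parameters(), lr=1e-3)

for epoch in range(3):
    # Dynamic pruning: the mask is refreshed each epoch from the current
    # weight magnitudes, so pruned positions can change as training proceeds.
    mask = magnitude_mask(pruned.fc.weight.detach(), sparsity=0.9)
    for _ in range(100):                      # toy random batches
        x = torch.randn(64, 32)
        y = torch.randint(0, 10, (64,))
        # Student step: task loss plus distillation from the source's logits.
        teacher_logits = source(x).detach()
        student_logits = pruned(x, mask)
        distill = F.kl_div(F.log_softmax(student_logits, dim=-1),
                           F.softmax(teacher_logits, dim=-1),
                           reduction="batchmean")
        student_loss = F.cross_entropy(student_logits, y) + distill
        opt_pruned.zero_grad()
        student_loss.backward()
        opt_pruned.step()
        # Teacher step: a first-order proxy for the paper's meta update.
        # The source is nudged by how the just-updated student now behaves,
        # so the two models co-evolve within a single iteration.
        with torch.no_grad():
            student_feedback = F.softmax(pruned(x, mask), dim=-1)
        teacher_loss = F.cross_entropy(source(x), y) + \
            F.kl_div(F.log_softmax(source(x), dim=-1), student_feedback,
                     reduction="batchmean")
        opt_source.zero_grad()
        teacher_loss.backward()
        opt_source.step()
```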

Results

Task             Dataset     Metric   Value  Model
Link Prediction  WN18RR      Hits@1   0.447  MetaSD
Link Prediction  WN18RR      Hits@3   0.504  MetaSD
Link Prediction  WN18RR      Hits@10  0.570  MetaSD
Link Prediction  WN18RR      MRR      0.491  MetaSD
Link Prediction  FB15k-237   Hits@1   0.300  MetaSD
Link Prediction  FB15k-237   Hits@3   0.428  MetaSD
Link Prediction  FB15k-237   Hits@10  0.571  MetaSD
Link Prediction  FB15k-237   MRR      0.391  MetaSD
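MRR and Hits@k are the standard link-prediction metrics: given the rank of each ground-truth entity among all candidates, MRR is the mean of the reciprocal ranks, and Hits@k is the fraction of test triples whose correct entity ranks within the top k. A minimal sketch, using made-up ranks purely for illustration:

```python
# Standard link-prediction metrics computed from the rank of the correct
# entity for each test triple; the ranks below are hypothetical.
def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    return sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 3, 2, 15, 1, 8]
print(f"MRR     = {mrr(ranks):.3f}")            # 0.504
print(f"Hits@10 = {hits_at_k(ranks, 10):.3f}")  # 0.833
```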

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Mixture of Experts in Large Language Models (2025-07-15)
Robust-Multi-Task Gradient Boosting (2025-07-15)