Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NSCaching: Simple and Efficient Negative Sampling for Knowledge Graph Embedding

Yongqi Zhang, Quanming Yao, Yingxia Shao, Lei Chen

Published: 2018-12-16
Tasks: Knowledge Graph Embedding, Reinforcement Learning, Graph Embedding, Link Prediction

Abstract

Knowledge Graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations of the graph into a low-dimensional vector space that can be used by subsequent algorithms. Negative sampling, which draws negative triplets from those not observed in the training data, is an important step in KG embedding. Recently, generative adversarial networks (GANs) have been introduced into negative sampling. By sampling negative triplets with large scores, these methods avoid the vanishing-gradient problem and thus achieve better performance. However, using a GAN makes the original model more complex and harder to train, since reinforcement learning must be used. In this paper, motivated by the observation that negative triplets with large scores are important but rare, we propose to keep track of them directly with a cache. How to sample from the cache and how to update it are then the two key design questions. We carefully design solutions that are not only efficient but also strike a good balance between exploration and exploitation. In this way, our method acts as a "distilled" version of previous GAN-based methods: it does not spend training time on additional parameters to fit the full distribution of negative triplets. Extensive experiments show that our method yields significant improvements across various KG embedding models and outperforms state-of-the-art GAN-based negative sampling methods.
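The cache idea in the abstract can be sketched roughly as follows. This is an illustrative sketch, not the paper's exact algorithm: the class name, cache sizes, the uniform draw from the cache, and the top-N update policy are all assumptions; `score` stands in for the KG embedding model's scoring function for a candidate negative entity.

```python
import random

class NegativeCache:
    """Illustrative cache of high-score negative candidates, keyed by (head, relation)."""

    def __init__(self, cache_size=50, n_candidates=200):
        self.cache_size = cache_size        # negatives kept per (head, relation)
        self.n_candidates = n_candidates    # fresh candidates mixed in per update
        self.cache = {}                     # (head, relation) -> list of tail entities

    def sample(self, head, relation, entities, score):
        """Return one negative tail for (head, relation), then refresh the cache."""
        key = (head, relation)
        if key not in self.cache:
            self.cache[key] = random.sample(entities, self.cache_size)
        # Exploitation: draw a (likely high-score) negative uniformly from the cache.
        negative_tail = random.choice(self.cache[key])
        self._update(key, entities, score)
        return negative_tail

    def _update(self, key, entities, score):
        # Exploration: mix fresh random candidates into the current cache, then
        # keep only the top-scoring ones, since large-score negatives are rare.
        candidates = self.cache[key] + random.sample(entities, self.n_candidates)
        candidates.sort(key=score, reverse=True)
        self.cache[key] = candidates[: self.cache_size]
```

Because the cache is refreshed with random candidates on every draw, stale negatives are gradually displaced while the highest-scoring ones persist, which is where the exploration/exploitation balance mentioned in the abstract comes from.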

Results

Task            | Dataset   | Metric  | Value  | Model
Link Prediction | FB15k     | MRR     | 0.7721 | ComplEx NSCaching
Link Prediction | FB15k     | Hits@10 | 0.8682 | ComplEx NSCaching
Link Prediction | FB15k     | MR      | 82     | ComplEx NSCaching
Link Prediction | WN18RR    | Hits@10 | 0.5089 | ComplEx NSCaching
Link Prediction | WN18RR    | MR      | 5365   | ComplEx NSCaching
Link Prediction | WN18RR    | MRR     | 0.4463 | ComplEx NSCaching
Link Prediction | WN18      | Hits@10 | 0.9398 | ComplEx NSCaching
Link Prediction | WN18      | MR      | 1072   | ComplEx NSCaching
Link Prediction | WN18      | MRR     | 0.9355 | ComplEx NSCaching
Link Prediction | FB15k-237 | Hits@10 | 0.4805 | ComplEx NSCaching
Link Prediction | FB15k-237 | MR      | 221    | ComplEx NSCaching
Link Prediction | FB15k-237 | MRR     | 0.3021 | ComplEx NSCaching
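The MR, MRR, and Hits@10 columns above are standard link-prediction ranking metrics. A minimal sketch of how they are typically computed, assuming one filtered rank per test triplet has already been obtained (the function name and input are illustrative, not from the paper):

```python
def ranking_metrics(ranks):
    """Compute MR, MRR, and Hits@10 from per-triplet ranks (1 = best).

    MR: mean rank of the true entity (lower is better).
    MRR: mean reciprocal rank (higher is better).
    Hits@10: fraction of test triplets whose true entity ranks in the top 10.
    """
    n = len(ranks)
    mr = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    hits10 = sum(1 for r in ranks if r <= 10) / n
    return mr, mrr, hits10
```

For example, ranks of [1, 2, 20] give MR ≈ 7.67, MRR ≈ 0.517, and Hits@10 ≈ 0.667; this illustrates why MRR is dominated by top-ranked answers while MR is sensitive to a few badly ranked ones.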

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)