
Quaternion Knowledge Graph Embeddings

Shuai Zhang, Yi Tay, Lina Yao, Qi Liu

2019-04-23 · NeurIPS 2019
Tasks: Knowledge Graphs · Knowledge Graph Embedding · Representation Learning · Knowledge Graph Embeddings · Knowledge Graph Completion · Graph Embedding · Link Prediction
Paper · PDF · Code

Abstract

In this work, we move beyond traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings. More specifically, quaternion embeddings (hypercomplex-valued embeddings with three imaginary components) are used to represent entities, and relations are modelled as rotations in quaternion space. The advantages of the proposed approach are: (1) latent inter-dependencies between all components are aptly captured by the Hamilton product, encouraging a more compact interaction between entities and relations; (2) quaternions enable expressive rotations in four-dimensional space and offer more degrees of freedom than rotations in the complex plane; (3) the proposed framework is a generalization of ComplEx to hypercomplex space while offering better geometric interpretations, and it satisfies the key desiderata of relational representation learning (i.e., modelling symmetry, anti-symmetry, and inversion). Experimental results demonstrate that our method achieves state-of-the-art performance on four well-established knowledge graph completion benchmarks.
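The scoring idea described above can be sketched in a few lines: rotate each head-entity quaternion by the (normalized) relation quaternion using the Hamilton product, then take the inner product with the tail embedding. The following is a minimal illustrative sketch, not the authors' implementation; the function names and the list-of-quaternions embedding layout are assumptions for clarity.

```python
import math

def hamilton(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,  # real part
        aw * bx + ax * bw + ay * bz - az * by,  # i component
        aw * by - ax * bz + ay * bw + az * bx,  # j component
        aw * bz + ax * by - ay * bx + az * bw,  # k component
    )

def normalize(q):
    """Scale a quaternion to unit norm so it represents a pure rotation."""
    n = math.sqrt(sum(c * c for c in q)) or 1.0
    return tuple(c / n for c in q)

def score(head, rel, tail):
    """QuatE-style score: Hamilton-rotate each head quaternion by the unit
    relation quaternion, then sum component-wise products with the tail.
    Each embedding is a list of quaternions, one per embedding dimension."""
    s = 0.0
    for h, r, t in zip(head, rel, tail):
        rotated = hamilton(h, normalize(r))
        s += sum(hc * tc for hc, tc in zip(rotated, t))
    return s
```

Because the Hamilton product is non-commutative, this scoring function can distinguish the ordering of head and tail, which is what lets the model capture anti-symmetric relations.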

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Link Prediction | FB15k | Hits@1 | 0.8 | QuatE |
| Link Prediction | FB15k | Hits@10 | 0.9 | QuatE |
| Link Prediction | FB15k | Hits@3 | 0.859 | QuatE |
| Link Prediction | FB15k | MR | 17 | QuatE |
| Link Prediction | FB15k | MRR | 0.833 | QuatE |
| Link Prediction | WN18RR | Hits@1 | 0.438 | QuatE |
| Link Prediction | WN18RR | Hits@10 | 0.582 | QuatE |
| Link Prediction | WN18RR | Hits@3 | 0.508 | QuatE |
| Link Prediction | WN18RR | MR | 2314 | QuatE |
| Link Prediction | WN18RR | MRR | 0.488 | QuatE |
| Link Prediction | WN18 | Hits@1 | 0.945 | QuatE |
| Link Prediction | WN18 | Hits@10 | 0.959 | QuatE |
| Link Prediction | WN18 | Hits@3 | 0.954 | QuatE |
| Link Prediction | WN18 | MR | 162 | QuatE |
| Link Prediction | WN18 | MRR | 0.95 | QuatE |
| Link Prediction | FB15k-237 | Hits@1 | 0.248 | QuatE |
| Link Prediction | FB15k-237 | Hits@10 | 0.55 | QuatE |
| Link Prediction | FB15k-237 | Hits@3 | 0.382 | QuatE |
| Link Prediction | FB15k-237 | MR | 87 | QuatE |
| Link Prediction | FB15k-237 | MRR | 0.348 | QuatE |
