Learning Knowledge Graph Embeddings with Type Regularizer
Bhushan Kotnis, Vivi Nastase
Abstract
Learning relations based on evidence from knowledge bases relies on processing the available relation instances. Many relations, however, have a clear domain and range, which we hypothesize could help learn a better, more generalizable model. We include such information in the RESCAL model as a regularization term added to the loss function that takes into account the types (categories) of the entities that appear as arguments to relations in the knowledge base. We note increased performance compared to the baseline model in terms of mean reciprocal rank and hits@N, N = 1, 3, 10. Furthermore, we identify settings that significantly impact the effectiveness of the type regularizer.
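The idea described above can be sketched in a few lines of numpy. The bilinear score e_s^T W_r e_o is the standard RESCAL scoring function; the specific form of the type penalty below (squared distance of subject/object embeddings to hypothetical domain/range "prototype" vectors `d_r` and `r_r`) is an illustrative assumption, not necessarily the exact regularizer from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim = 5, 4

# Entity embeddings and one relation matrix.
# RESCAL scores a triple (s, r, o) as e_s^T W_r e_o.
E = rng.normal(size=(n_entities, dim))
W_r = rng.normal(size=(dim, dim))

def rescal_score(s, o):
    return E[s] @ W_r @ E[o]

# Hypothetical type information: prototype embeddings for the relation's
# domain (expected subject type) and range (expected object type).
d_r = rng.normal(size=dim)  # domain prototype (assumption)
r_r = rng.normal(size=dim)  # range prototype (assumption)

def type_penalty(s, o, lam=0.1):
    # Penalty grows when subject/object embeddings drift away from the
    # relation's expected domain/range type regions.
    return lam * (np.sum((E[s] - d_r) ** 2) + np.sum((E[o] - r_r) ** 2))

def regularized_loss(s, o, label):
    # Logistic loss on the triple score (label in {+1, -1}),
    # plus the type-regularization term.
    score = rescal_score(s, o)
    base = np.log1p(np.exp(-label * score))
    return base + type_penalty(s, o)

loss = regularized_loss(0, 1, label=1)
```

Both terms of the loss are non-negative, so the regularizer can only trade off fit against type compatibility; its weight (here `lam`) would be tuned on validation data.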