Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


R-GCN: The R Could Stand for Random

Vic Degraeve, Gilles Vandewiele, Femke Ongenae, Sofie Van Hoecke

2022-03-04 · Knowledge Graphs · Representation Learning · Node Classification · Link Prediction

Paper · PDF · Code (official)

Abstract

The inception of the Relational Graph Convolutional Network (R-GCN) marked a milestone in the Semantic Web domain as a widely cited method that generalises end-to-end hierarchical representation learning to Knowledge Graphs (KGs). R-GCNs generate representations for nodes of interest by repeatedly aggregating parameterised, relation-specific transformations of their neighbours. However, in this paper, we argue that the R-GCN's main contribution lies in this "message passing" paradigm, rather than the learned weights. To this end, we introduce the "Random Relational Graph Convolutional Network" (RR-GCN), which leaves all parameters untrained and thus constructs node embeddings by aggregating randomly transformed random representations from neighbours, i.e., with no learned parameters. We empirically show that RR-GCNs can compete with fully trained R-GCNs in both node classification and link prediction settings.
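The core idea in the abstract — relation-specific message passing where the transformations are random and frozen rather than learned — can be sketched in a few lines. The following is a minimal, hypothetical NumPy sketch of one such untrained layer, not the authors' implementation; all function and variable names are assumptions for illustration.

```python
import numpy as np

def rr_gcn_layer(node_feats, edges, num_relations, out_dim, seed=0):
    """One untrained R-GCN-style message-passing layer (illustrative sketch).

    node_feats: (N, d) array of current node representations.
    edges: iterable of (src, rel, dst) triples from the knowledge graph.
    Relation-specific weights are drawn randomly and never trained,
    mirroring the RR-GCN idea of keeping all parameters untrained.
    """
    rng = np.random.default_rng(seed)
    n, d = node_feats.shape
    # One fixed random projection per relation type (no gradient updates).
    weights = rng.standard_normal((num_relations, d, out_dim)) / np.sqrt(d)
    out = np.zeros((n, out_dim))
    deg = np.zeros(n)
    for src, rel, dst in edges:
        # Message: relation-specific random transform of the neighbour.
        out[dst] += node_feats[src] @ weights[rel]
        deg[dst] += 1
    deg[deg == 0] = 1  # avoid division by zero for isolated nodes
    return np.tanh(out / deg[:, None])  # mean aggregation + nonlinearity
```

Stacking a few such layers on random initial node representations yields the embeddings that the paper then feeds to a downstream classifier or scorer.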

Results

| Task                | Dataset   | Metric   | Value | Model                                       |
|---------------------|-----------|----------|-------|---------------------------------------------|
| Link Prediction     | FB15k-237 | Hits@1   | 0.157 | RR-GCN-PPV                                  |
| Link Prediction     | FB15k-237 | Hits@10  | 0.412 | RR-GCN-PPV                                  |
| Link Prediction     | FB15k-237 | Hits@3   | 0.256 | RR-GCN-PPV                                  |
| Link Prediction     | FB15k-237 | MRR      | 0.238 | RR-GCN-PPV                                  |
| Node Classification | AMPLUS    | Accuracy | 84.54 | RR-GCN-PPV                                  |
| Node Classification | AMPLUS    | Accuracy | 83.81 | R-GCN                                       |
| Node Classification | DBLP      | Accuracy | 70.61 | RR-GCN-PPV                                  |
| Node Classification | DBLP      | Accuracy | 68.51 | R-GCN                                       |
| Node Classification | AIFB      | Accuracy | 95.83 | RR-GCN-PPV-CUT                              |
| Node Classification | AIFB      | Accuracy | 86.11 | RR-GCN-PPV                                  |
| Node Classification | MUTAG     | Accuracy | 79.41 | RR-GCN-PPV                                  |
| Node Classification | DMGFULL   | Accuracy | 63.38 | RR-GCN-PPV                                  |
| Node Classification | DMGFULL   | Accuracy | 57.52 | R-GCN                                       |
| Node Classification | AM        | Accuracy | 91.31 | RR-GCN-PPV-CUT (unimportant relations removed) |
| Node Classification | AM        | Accuracy | 84.8  | RR-GCN-PPV-CUT                              |
| Node Classification | AM        | Accuracy | 84.65 | RR-GCN-PPV                                  |
| Node Classification | MDGENRE   | Accuracy | 67.33 | R-GCN                                       |
| Node Classification | MDGENRE   | Accuracy | 67.15 | RR-GCN-PPV                                  |
| Node Classification | DMG777K   | Accuracy | 63.97 | RR-GCN-PPV                                  |
| Node Classification | DMG777K   | Accuracy | 62.51 | R-GCN                                       |
| Node Classification | BGS       | Accuracy | 84.14 | RR-GCN-PPV-CUT                              |
| Node Classification | BGS       | Accuracy | 78.97 | RR-GCN-PPV                                  |
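The link-prediction rows above use the standard KG ranking metrics MRR and Hits@k: for each test triple, the true entity is ranked among all candidates; MRR is the mean reciprocal of those ranks, and Hits@k is the fraction of triples ranked in the top k. A small helper (illustrative, not from the paper's code; the function name is an assumption) makes the definitions concrete:

```python
def mrr_and_hits(ranks, k):
    """MRR and Hits@k from the 1-based ranks of the true entity per query.

    MRR = mean of 1/rank; Hits@k = fraction of queries with rank <= k.
    """
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_at_k = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits_at_k

# Example: three test triples whose true entities ranked 1, 2, and 10.
# MRR = (1 + 1/2 + 1/10) / 3 ≈ 0.533; Hits@3 = 2/3.
```

Note that both metrics lie in [0, 1] and higher is better, which is why the Hits@1 value (0.157) is necessarily no larger than Hits@3 (0.256) or Hits@10 (0.412).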
