Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SCR: Training Graph Neural Networks with Consistency Regularization

Chenhui Zhang, Yufei He, Yukuo Cen, Zhenyu Hou, Wenzheng Feng, Yuxiao Dong, Xu Cheng, Hongyun Cai, Feng He, Jie Tang

Published: 2021-12-08
Tasks: Node Classification, Node Property Prediction

Abstract

We present SCR, a framework for enhancing the training of graph neural networks (GNNs) with consistency regularization. Regularization is a family of strategies used in machine learning to reduce overfitting and improve generalization. However, it is unclear how best to design such strategies for GNNs, which typically operate in a semi-supervised setting on graph data. The major challenge lies in efficiently balancing the trade-off between the error from the labeled data and that from the unlabeled data. SCR is a simple yet general framework that introduces two consistency-regularization strategies to address this challenge. The first minimizes the disagreement among the perturbed predictions produced by different versions of a GNN model. The second leverages the Mean Teacher paradigm, estimating a consistency loss between teacher and student models rather than the disagreement among predictions. We conducted experiments on three large-scale node classification datasets from the Open Graph Benchmark (OGB). The results demonstrate that SCR is a general framework that can enhance various GNNs to achieve better performance. As of this submission, SCR holds the top-1 entry on all three OGB leaderboards.
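The two strategies from the abstract can be sketched in a few lines of plain Python. This is an illustrative simplification, not the paper's actual implementation: the function names, the use of a squared-error disagreement measure, and the EMA decay value are all assumptions for the sake of the example.

```python
def consistency_loss(predictions):
    """Strategy 1 (sketch): mean squared disagreement between each
    perturbed prediction and the mean prediction across the perturbed
    versions of one GNN model. `predictions` is a list of equal-length
    probability/logit vectors, one per perturbed version."""
    n = len(predictions)          # number of perturbed versions
    k = len(predictions[0])       # number of classes
    mean = [sum(p[j] for p in predictions) / n for j in range(k)]
    return sum(
        (p[j] - mean[j]) ** 2 for p in predictions for j in range(k)
    ) / (n * k)

def ema_update(teacher_params, student_params, decay=0.99):
    """Strategy 2 (sketch): in the Mean Teacher paradigm, the teacher's
    weights are an exponential moving average of the student's weights;
    the consistency loss is then computed between teacher and student
    outputs. Parameters are flattened to plain lists here for brevity."""
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

If all perturbed versions agree exactly, `consistency_loss` is zero; it grows as their predictions diverge, which is the quantity the first strategy minimizes alongside the supervised loss on labeled nodes.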

Results

Task | Dataset | Metric | Value | Model
Node Property Prediction | ogbn-papers100M | Number of params | 67560875 | GAMLP+RLU+SCR
Node Property Prediction | ogbn-papers100M | Number of params | 67560875 | GAMLP+SCR-m
Node Property Prediction | ogbn-papers100M | Number of params | 67560875 | GAMLP+SCR
Node Property Prediction | ogbn-products | Number of params | 1154654 | GIANT-XRT+SAGN+SCR+C&S
Node Property Prediction | ogbn-products | Number of params | 1154654 | GIANT-XRT+SAGN+MCR+C&S
Node Property Prediction | ogbn-products | Number of params | 1154654 | GIANT-XRT+SAGN+SCR
Node Property Prediction | ogbn-products | Number of params | 1154654 | GIANT-XRT+SAGN+MCR
Node Property Prediction | ogbn-products | Number of params | 2144151 | GIANT-XRT+GAMLP+MCR
Node Property Prediction | ogbn-products | Number of params | 3335831 | GAMLP+RLU+SCR+C&S
Node Property Prediction | ogbn-products | Number of params | 3335831 | GAMLP+RLU+SCR
Node Property Prediction | ogbn-products | Number of params | 3335831 | GAMLP+MCR
Node Property Prediction | ogbn-products | Number of params | 2179678 | SAGN+MCR
Node Property Prediction | ogbn-mag | Number of params | 6734882 | NARS-GAMLP+RLU+SCR
Node Property Prediction | ogbn-mag | Number of params | 6734882 | NARS-GAMLP+SCR-m
Node Property Prediction | ogbn-mag | Number of params | 6734882 | NARS-GAMLP+SCR

Related Papers

Demystifying Distributed Training of Graph Neural Networks for Link Prediction (2025-06-25)
Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models (2025-06-17)
Delving into Instance-Dependent Label Noise in Graph Data: A Comprehensive Study and Benchmark (2025-06-14)
Graph Semi-Supervised Learning for Point Classification on Data Manifolds (2025-06-13)
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols (2025-06-11)
Wasserstein Hypergraph Neural Network (2025-06-11)
Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning (2025-06-05)
iN2V: Bringing Transductive Node Embeddings to Inductive Graphs (2025-06-05)