Data sourced from the PWC Archive (CC-BY-SA 4.0).


Combining Label Propagation and Simple Models Out-performs Graph Neural Networks

Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin R. Benson

Published 2020-10-27 · ICLR 2021
Tasks: Node Classification on Non-Homophilic (Heterophilic) Graphs · Node Classification · Node Property Prediction
Links: Paper · PDF · Code (official)

Abstract

Graph Neural Networks (GNNs) are the predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an "error correlation" that spreads residual errors in training data to correct errors in test data and (ii) a "prediction correlation" that smooths the predictions on the test data. We call this overall procedure Correct and Smooth (C&S), and the post-processing steps are implemented via simple modifications to standard label propagation techniques from early graph-based semi-supervised learning methods. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as was done in traditional techniques) yields easy and substantial performance gains. We can also incorporate our techniques into big GNN models, providing modest gains. Our code for the OGB results is at https://github.com/Chillee/CorrectAndSmooth.
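Both post-processing steps described above are variants of classical label propagation over a normalized adjacency matrix. The sketch below is an illustrative NumPy reconstruction, not the authors' implementation: the function names, the fixed-iteration propagation, and the default `alpha`/`scale` values are assumptions; see the linked repository for the paper's exact variants of the correction step.

```python
import numpy as np

def propagate(seed, S, alpha, num_iters=50):
    """Label propagation: iterate x <- alpha * S @ x + (1 - alpha) * seed."""
    x = seed.copy()
    for _ in range(num_iters):
        x = alpha * S @ x + (1.0 - alpha) * seed
    return x

def correct_and_smooth(soft_preds, labels_onehot, train_mask, S,
                       alpha1=0.8, alpha2=0.8, scale=1.0):
    """Post-process a base model's soft predictions with the two C&S steps."""
    # (i) "Correct": propagate the residual errors observed on training nodes,
    # on the assumption that a base model's errors are correlated along edges.
    errors = np.zeros_like(soft_preds)
    errors[train_mask] = labels_onehot[train_mask] - soft_preds[train_mask]
    corrected = soft_preds + scale * propagate(errors, S, alpha1)
    # (ii) "Smooth": clamp training nodes to their true labels, then smooth
    # the resulting guesses over the graph.
    guess = corrected.copy()
    guess[train_mask] = labels_onehot[train_mask]
    return propagate(guess, S, alpha2)
```

Here `S` would be the symmetrically normalized adjacency matrix D^(-1/2) A D^(-1/2), and `soft_preds` the class probabilities of a graph-agnostic base model such as a plain MLP or linear classifier.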

Results

Task | Dataset | Metric | Value | Model
Node Property Prediction | ogbn-arxiv | Number of params | 1441580 | GAT (norm. adj.) + label reuse + C&S
Node Property Prediction | ogbn-arxiv | Number of params | 1567000 | GAT + C&S
Node Property Prediction | ogbn-arxiv | Number of params | 155824 | GCN_res + C&S_v2
Node Property Prediction | ogbn-arxiv | Number of params | 175656 | MLP + C&S
Node Property Prediction | ogbn-arxiv | Number of params | 155824 | GCN_res + C&S
Node Property Prediction | ogbn-arxiv | Number of params | 15400 | Linear + C&S
Node Property Prediction | ogbn-arxiv | Number of params | 5160 | Plain Linear + C&S
Node Property Prediction | ogbn-products | Number of params | 406063 | Spec-MLP-Wide + C&S
Node Property Prediction | ogbn-products | Number of params | 96247 | MLP + C&S
Node Property Prediction | ogbn-products | Number of params | 10763 | Linear + C&S
Node Property Prediction | ogbn-products | Number of params | 4747 | Plain Linear + C&S
Node Property Prediction | ogbn-products | Number of params | 753622 | GAT w/ NS + C&S
Node Property Prediction | ogbn-products | Number of params | 207919 | GraphSAGE w/ NS + C&S

Related Papers

Demystifying Distributed Training of Graph Neural Networks for Link Prediction (2025-06-25)
Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models (2025-06-17)
Delving into Instance-Dependent Label Noise in Graph Data: A Comprehensive Study and Benchmark (2025-06-14)
Graph Semi-Supervised Learning for Point Classification on Data Manifolds (2025-06-13)
Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols (2025-06-11)
Wasserstein Hypergraph Neural Network (2025-06-11)
Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning (2025-06-05)
iN2V: Bringing Transductive Node Embeddings to Inductive Graphs (2025-06-05)