Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns

Yifei Sun, Qi Zhu, Yang Yang, Chunping Wang, Tianyu Fan, Jiajun Zhu, Lei Chen

2023-12-21 · Graph Mining · Transfer Learning · Graph Classification · Graph Learning
Paper · PDF · Code (official)

Abstract

Recently, the paradigm of pre-training and fine-tuning graph neural networks has been intensively studied and applied in a wide range of graph mining tasks. Its success is generally attributed to the structural consistency between pre-training and downstream datasets, which, however, does not hold in many real-world scenarios. Existing works have shown that the structural divergence between pre-training and downstream graphs significantly limits the transferability when using the vanilla fine-tuning strategy. This divergence leads to model overfitting on pre-training graphs and causes difficulties in capturing the structural properties of the downstream graphs. In this paper, we identify the fundamental cause of structural divergence as the discrepancy of generative patterns between the pre-training and downstream graphs. Furthermore, we propose G-Tuning to preserve the generative patterns of downstream graphs. Given a downstream graph G, the core idea is to tune the pre-trained GNN so that it can reconstruct the generative patterns of G, the graphon W. However, the exact reconstruction of a graphon is known to be computationally expensive. To overcome this challenge, we provide a theoretical analysis that establishes the existence of a set of alternative graphons called graphon bases for any given graphon. By utilizing a linear combination of these graphon bases, we can efficiently approximate W. This theoretical finding forms the basis of our proposed model, as it enables effective learning of the graphon bases and their associated coefficients. Compared with existing algorithms, G-Tuning demonstrates an average improvement of 0.5% and 2.6% on in-domain and out-of-domain transfer learning experiments, respectively.
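The abstract's key computational idea is to avoid reconstructing the graphon W exactly, and instead approximate it as a linear combination of a small set of graphon bases. As a minimal sketch of that idea (not the paper's actual model): if graphons are discretized as symmetric step-function matrices, the coefficients of the combination can be fit by ordinary least squares. All names, the basis construction, and the target graphon below are illustrative assumptions.

```python
import numpy as np

# A graphon can be discretized as a symmetric step-function matrix
# W[i, j] in [0, 1], the edge probability between latent positions i and j.
rng = np.random.default_rng(0)
K, R = 5, 16  # K: number of graphon bases (assumed small), R: grid resolution

# Hypothetical graphon bases: K random symmetric step functions.
bases = np.stack([(B + B.T) / 2 for B in rng.random((K, R, R))])  # (K, R, R)

# Stand-in target graphon: a smooth "distance" graphon representing the
# downstream graph's generative pattern.
x = np.linspace(0.0, 1.0, R)
W = np.exp(-np.abs(x[:, None] - x[None, :]))

# Fit coefficients alpha so that sum_k alpha_k * bases[k] ~ W, by flattening
# each basis into a column and solving a least-squares problem.
A = bases.reshape(K, -1).T                      # (R*R, K) design matrix
alpha, *_ = np.linalg.lstsq(A, W.ravel(), rcond=None)
W_hat = np.tensordot(alpha, bases, axes=1)      # (R, R) approximation

err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")
```

In G-Tuning itself the bases and coefficients are learned end-to-end during fine-tuning rather than solved in closed form; the sketch only shows why a small basis set makes the approximation cheap, since fitting K coefficients replaces reconstructing the full R×R graphon.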

Results

Task | Dataset | Metric | Value | Model
Graph Classification | IMDb-B | Accuracy (10-fold) | 74.3 | G-Tuning
Graph Classification | SIDER | ROC-AUC | 61.4 | G-Tuning
Graph Classification | REDDIT-12K | Accuracy (10-fold) | 42.8 | G-Tuning
Graph Classification | MSRC-21 (per-class) | Accuracy (10-fold) | 11.01 | G-Tuning
Graph Classification | IMDb-M | Accuracy (10-fold) | 51.8 | G-Tuning
Graph Classification | BACE | ROC-AUC | 84.79 | G-Tuning
Graph Classification | ClinTox | ROC-AUC | 74.64 | G-Tuning
Graph Classification | MUV | ROC-AUC | 75.84 | G-Tuning
Graph Classification | ToxCast | ROC-AUC | 64.25 | G-Tuning
Graph Classification | BBBP | ROC-AUC | 72.59 | G-Tuning
Graph Classification | HIV | ROC-AUC | 77.33 | G-Tuning
Graph Classification | MUTAG | Accuracy (10-fold) | 86.14 | G-Tuning
Graph Classification | PROTEINS | Accuracy (10-fold) | 72.05 | G-Tuning
Graph Classification | Tox21 | ROC-AUC | 75.8 | G-Tuning
Graph Classification | ENZYMES | Accuracy (10-fold) | 26.7 | G-Tuning

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
A Graph-in-Graph Learning Framework for Drug-Target Interaction Prediction (2025-07-15)
Graph World Model (2025-07-14)
Federated Learning with Graph-Based Aggregation for Traffic Forecasting (2025-07-13)