Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Strategies for Pre-training Graph Neural Networks

Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec

Published: 2019-05-29 · ICLR 2020
Tasks: Molecular Property Prediction, Representation Learning, Open-Ended Question Answering, Graph Classification, Protein Function Prediction

Abstract

Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading to up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
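The abstract's core idea is to combine a node-level self-supervised objective with a graph-level objective so the GNN learns local and global structure at once. The sketch below illustrates that combination on toy data with plain numpy: a ContextPred-style node loss (score a node embedding against its true surrounding context versus a randomly drawn negative context) plus a supervised graph-level logistic head. The function names, toy embeddings, and loss weighting are illustrative assumptions, not the paper's implementation, which trains an actual GNN encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def contextpred_loss(node_emb, ctx_emb, neg_ctx_emb):
    """Node-level self-supervised objective (ContextPred-style sketch):
    dot-product score of each node embedding against its true context
    (positive) and a random context (negative), with a logistic loss."""
    pos = np.sum(node_emb * ctx_emb, axis=1)      # positive pair scores
    neg = np.sum(node_emb * neg_ctx_emb, axis=1)  # negative pair scores
    # -log sigmoid(pos) - log sigmoid(-neg), written stably via log1p(exp(.))
    return float(np.mean(np.log1p(np.exp(-pos)) + np.log1p(np.exp(neg))))

def graph_level_loss(graph_emb, labels, w):
    """Graph-level supervised objective sketch: a linear head on pooled
    graph embeddings with a binary logistic loss (labels in {-1, +1})."""
    logits = graph_emb @ w
    return float(np.mean(np.log1p(np.exp(-labels * logits))))

# Toy data: 8 node embeddings (16-dim), with contexts near their nodes,
# plus 4 pooled graph embeddings with binary labels.
node_emb = rng.normal(size=(8, 16))
ctx_emb = node_emb + 0.1 * rng.normal(size=(8, 16))
neg_ctx = rng.normal(size=(8, 16))
graph_emb = rng.normal(size=(4, 16))
labels = np.array([1.0, -1.0, 1.0, -1.0])
w = rng.normal(size=16)

node_loss = contextpred_loss(node_emb, ctx_emb, neg_ctx)
graph_loss = graph_level_loss(graph_emb, labels, w)
total = node_loss + graph_loss  # combined pre-training objective
print(node_loss, graph_loss, total)
```

In the paper's setup, both terms are applied during pre-training so that neither purely local nor purely global signals dominate; the flattened results below report the fine-tuned downstream performance of such pre-trained models.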

Results

| Task                          | Dataset       | Metric  | Value   | Model       |
|-------------------------------|---------------|---------|---------|-------------|
| Drug Discovery                | clintox       | AUC     | 0.726   | ContextPred |
| Drug Discovery                | BACE          | AUC     | 0.845   | ContextPred |
| Drug Discovery                | Tox21         | AUC     | 0.781   | ContextPred |
| Drug Discovery                | HIV dataset   | AUC     | 0.799   | ContextPred |
| Drug Discovery                | BBBP          | AUC     | 0.687   | ContextPred |
| Drug Discovery                | ToxCast       | AUC     | 0.657   | ContextPred |
| Drug Discovery                | SIDER         | AUC     | 0.627   | ContextPred |
| Drug Discovery                | MUV           | AUC     | 0.813   | ContextPred |
| Molecular Property Prediction | FreeSolv      | RMSE    | 2.764   | PretrainGNN |
| Molecular Property Prediction | clintox       | ROC-AUC | 72.6    | PretrainGNN |
| Molecular Property Prediction | ToxCast       | ROC-AUC | 65.7    | PretrainGNN |
| Molecular Property Prediction | Lipophilicity | RMSE    | 0.739   | PretrainGNN |
| Molecular Property Prediction | QM7           | MAE     | 113.2   | PretrainGNN |
| Molecular Property Prediction | BBBP          | ROC-AUC | 68.7    | PretrainGNN |
| Molecular Property Prediction | QM9           | MAE     | 0.00922 | PretrainGNN |
| Molecular Property Prediction | QM8           | MAE     | 0.02    | PretrainGNN |
| Molecular Property Prediction | SIDER         | ROC-AUC | 62.7    | PretrainGNN |
| Molecular Property Prediction | Tox21         | ROC-AUC | 78.1    | PretrainGNN |
| Molecular Property Prediction | BACE          | ROC-AUC | 84.5    | PretrainGNN |
| Atomistic Description         | FreeSolv      | RMSE    | 2.764   | PretrainGNN |
| Atomistic Description         | clintox       | ROC-AUC | 72.6    | PretrainGNN |
| Atomistic Description         | ToxCast       | ROC-AUC | 65.7    | PretrainGNN |
| Atomistic Description         | Lipophilicity | RMSE    | 0.739   | PretrainGNN |
| Atomistic Description         | QM7           | MAE     | 113.2   | PretrainGNN |
| Atomistic Description         | BBBP          | ROC-AUC | 68.7    | PretrainGNN |
| Atomistic Description         | QM9           | MAE     | 0.00922 | PretrainGNN |
| Atomistic Description         | QM8           | MAE     | 0.02    | PretrainGNN |
| Atomistic Description         | SIDER         | ROC-AUC | 62.7    | PretrainGNN |
| Atomistic Description         | Tox21         | ROC-AUC | 78.1    | PretrainGNN |
| Atomistic Description         | BACE          | ROC-AUC | 84.5    | PretrainGNN |

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
A Mixed-Primitive-based Gaussian Splatting Method for Surface Reconstruction (2025-07-15)
Dual Dimensions Geometric Representation Learning Based Document Dewarping (2025-07-11)