Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction

Eli Chien, Wei-Cheng Chang, Cho-Jui Hsieh, Hsiang-Fu Yu, Jiong Zhang, Olgica Milenkovic, Inderjit S. Dhillon

2021-10-29 · ICLR 2022

Tasks: Extreme Multi-Label Classification · Self-Supervised Learning · Node Property Prediction · Multi-Label Classification · Language Modelling · Link Prediction

Links: Paper · PDF · 4 code implementations (1 official)

Abstract

Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take numerical node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related learning tasks. Recent works exploring the correlation between numerical node features and graph structure via self-supervised learning have paved the way for further performance improvements of GNNs. However, methods used for extracting numerical node features from raw data are still graph-agnostic within standard GNN pipelines. This practice is sub-optimal as it prevents one from fully utilizing potential correlations between graph topology and node attributes. To mitigate this issue, we propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT). GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information, and scales to large datasets. We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework. We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets: For example, we improve the accuracy of the top-ranked method GAMLP from $68.25\%$ to $69.67\%$, SGC from $63.29\%$ to $66.10\%$ and MLP from $47.24\%$ to $61.10\%$ on the ogbn-papers100M dataset by leveraging GIANT.
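The abstract's key idea is to cast "predict a node's neighborhood" as an eXtreme Multi-label Classification (XMC) problem: each node's raw text is the input, and its (multi-scale) neighborhood is the label set. The sketch below is a hypothetical illustration of only the self-supervised target construction, using a toy adjacency list; the actual GIANT pipeline fine-tunes a language model on these targets with XR-Transformer, which is not reproduced here. The function name and graph are assumptions, not the authors' code.

```python
# Hypothetical sketch: build multi-scale neighborhood label sets for the
# XMC-style self-supervision described in the abstract. For each node, the
# label set at scale k is the set of nodes reachable within k hops.

def multi_scale_neighborhood_labels(adj, num_hops):
    """Return, for every node, one label set per scale (1..num_hops hops).

    adj:      dict mapping node -> iterable of neighbor nodes (undirected).
    num_hops: number of scales K.
    Returns:  dict node -> list of frozensets, one per scale.
    """
    labels = {}
    for node in adj:
        frontier = {node}   # nodes discovered at the previous hop
        reached = {node}    # all nodes seen so far
        per_scale = []
        for _ in range(num_hops):
            # Expand one hop, excluding anything already reached.
            frontier = {nbr for u in frontier for nbr in adj[u]} - reached
            reached |= frontier
            per_scale.append(frozenset(reached - {node}))
        labels[node] = per_scale
    return labels


# Toy path graph 0-1-2-3: node 0's 1-hop label set is {1}, 2-hop is {1, 2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = multi_scale_neighborhood_labels(adj, num_hops=2)
```

In the full method these label sets (one binary label per node, so the label space is graph-sized, hence "extreme") supervise fine-tuning of the text encoder, and the resulting node features replace the graph-agnostic ones in the standard GNN pipeline.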

Results

Task                     | Dataset         | Metric           | Value    | Model
------------------------ | --------------- | ---------------- | -------- | ----------------------------------------
Node Property Prediction | ogbn-arxiv      | Number of params | 1304912  | GIANT-XRT+RevGAT+KD (use raw text)
Node Property Prediction | ogbn-arxiv      | Number of params | 546344   | GIANT-XRT+GraphSAGE (use raw text)
Node Property Prediction | ogbn-arxiv      | Number of params | 273960   | GIANT-XRT+MLP (use raw text)
Node Property Prediction | ogbn-papers100M | Number of params | 21551631 | GIANT-XRT+GAMLP+RLU (use raw text)
Node Property Prediction | ogbn-products   | Number of params | 1548382  | GIANT-XRT+SAGN+SLE+C&S (use raw text)
Node Property Prediction | ogbn-products   | Number of params | 1548382  | GIANT-XRT+SAGN+SLE (use raw text)
Node Property Prediction | ogbn-products   | Number of params | 417583   | GIANT-XRT+GraphSAINT (use raw text)
Node Property Prediction | ogbn-products   | Number of params | 275759   | GIANT-XRT+MLP (use raw text)

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)