Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Graph Entropy Minimization for Semi-supervised Node Classification

Yi Luo, Guangchun Luo, Ke Qin, Aiguo Chen

2023-05-31 · Node Classification · Classification · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

Node classifiers used in industry are required to simultaneously reduce prediction errors, training resources, and inference latency. However, most graph neural networks (GNNs) concentrate on only one or two of these goals. The neglected aspects then become the shortest stave of the barrel, hindering practical deployment in industrial-scale tasks. This work proposes a novel semi-supervised learning method termed Graph Entropy Minimization (GEM) to address all three issues at once. GEM enriches its one-hop aggregation with massive numbers of uncategorized nodes, making its prediction accuracy comparable to GNNs that use two or more hops of message passing. Its objective can be decomposed to support stochastic training with mini-batches of independent edge samples, enabling extremely fast sampling and memory-efficient training. While its one-hop aggregation already makes inference faster than deep GNNs, GEM can be accelerated further by deriving a non-hop classifier via online knowledge distillation. GEM is therefore a handy choice for latency-restricted and error-sensitive services running on resource-constrained hardware. Code is available at https://github.com/cf020031308/GEM.
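The abstract does not spell out GEM's loss, so the sketch below only illustrates the pipeline shape it describes: one-hop mean aggregation that decomposes over edges (so mini-batches of independently sampled edges suffice for training), followed by a "non-hop" student that needs no graph at inference. All names and the toy graph are illustrative assumptions, not the authors' code; the least-squares student merely stands in for the online-distilled classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes with 4-dim features, edges as (src, dst) pairs.
X = rng.normal(size=(6, 4))
edges = np.array([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)])

def one_hop_aggregate(X, edges, n_nodes):
    """Mean-aggregate each node with its one-hop neighbours (undirected)."""
    agg = X.copy()                 # self term
    deg = np.ones(n_nodes)         # counts the self term
    for s, d in edges:
        agg[d] += X[s]; deg[d] += 1
        agg[s] += X[d]; deg[s] += 1
    return agg / deg[:, None]

# Because the sum above decomposes over edges, a stochastic trainer can
# draw mini-batches of independent edge samples and touch only their
# endpoints; here we just draw one such batch to show the sampling.
batch = edges[rng.choice(len(edges), size=3, replace=False)]

H = one_hop_aggregate(X, edges, n_nodes=6)

# "Non-hop" student via distillation: fit a graph-free map from raw
# features to the aggregated representation (least squares as a stand-in
# for the distilled classifier); inference then needs no edges at all.
W, *_ = np.linalg.lstsq(X, H, rcond=None)
student_out = X @ W
```

At serving time only `W` is kept, so the latency argument from the abstract follows directly: the student is a single matrix product per node, with no neighbourhood lookups.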

Results

Task                | Dataset                                           | Metric   | Value | Model
Node Classification | PubMed (Public Split: fixed 20 nodes per class)   | Accuracy | 79.91 | Graph-MLP
Node Classification | PubMed (Public Split: fixed 20 nodes per class)   | Accuracy | 78.48 | GEM
Node Classification | CiteSeer (Public Split: fixed 20 nodes per class) | Accuracy | 74.2  | GEM
Node Classification | CiteSeer (Public Split: fixed 20 nodes per class) | Accuracy | 73.53 | OKDEEM
Node Classification | CiteSeer (Public Split: fixed 20 nodes per class) | Accuracy | 72.63 | EEM

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)