Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving Attention Mechanism in Graph Neural Networks via Cardinality Preservation

Shuo Zhang, Lei Xie

2019-07-04 · Graph Representation Learning · Graph Classification · Node Classification

Paper · PDF · Code (official)

Abstract

Graph Neural Networks (GNNs) are powerful tools for learning representations of graph-structured data. Most GNNs use the message-passing scheme, in which the embedding of a node is iteratively updated by aggregating information from its neighbors. To better express the influence of individual nodes, attention mechanisms have become a popular way to assign trainable weights to the nodes during aggregation. Although attention-based GNNs have achieved remarkable results on various tasks, a clear understanding of their discriminative capacity is still missing. In this work, we present a theoretical analysis of the representational properties of GNNs that adopt an attention mechanism as the aggregator. Our analysis identifies all cases in which such attention-based GNNs always fail to distinguish certain distinct structures. These failures arise because attention-based aggregation ignores cardinality information. To improve the performance of attention-based GNNs, we propose Cardinality Preserved Attention (CPA) models that can be applied to any kind of attention mechanism. Our experiments on node and graph classification confirm our theoretical analysis and show the competitive performance of our CPA models.
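The failure mode described in the abstract can be illustrated numerically: because softmax attention weights sum to 1, the aggregated output is a weighted *mean* of neighbor features, so two neighborhoods that contain the same feature with different multiplicities collapse to the same embedding. The sketch below is a minimal, hypothetical illustration (the scoring function and the cardinality scaling are simplified stand-ins, not the paper's exact CPA formulations):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attn_aggregate(neighbors):
    # Standard attention aggregation: softmax-normalized weighted sum.
    # `neighbors` is an (n, d) array of neighbor feature vectors.
    scores = neighbors.sum(axis=1)      # toy scoring function (assumption)
    weights = softmax(scores)
    return weights @ neighbors

def cpa_aggregate(neighbors):
    # Cardinality-preserving variant (sketch): rescale the attention output
    # by the neighborhood size, so multisets that differ only in
    # multiplicity no longer map to the same embedding.
    return len(neighbors) * attn_aggregate(neighbors)

x = np.array([1.0, 2.0])
one = np.stack([x])        # neighborhood multiset {x}
two = np.stack([x, x])     # neighborhood multiset {x, x}

# Softmax attention cannot tell the two neighborhoods apart,
# while the cardinality-scaled variant distinguishes them.
print(attn_aggregate(one), attn_aggregate(two))
print(cpa_aggregate(one), cpa_aggregate(two))
```

Here `attn_aggregate` returns `[1, 2]` for both neighborhoods, whereas `cpa_aggregate` returns `[1, 2]` and `[2, 4]`, recovering the lost cardinality information.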

Results

| Task                 | Dataset  | Metric   | Value | Model             |
|----------------------|----------|----------|-------|-------------------|
| Graph Classification | REDDIT-B | Accuracy | 92.57 | GAT-GC (f-Scaled) |
| Graph Classification | ENZYMES  | Accuracy | 58.45 | GAT-GC (f-Scaled) |

Related Papers

- SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
- Permutation Equivariant Neural Controlled Differential Equations for Dynamic Graph Representation Learning (2025-06-25)
- Demystifying Distributed Training of Graph Neural Networks for Link Prediction (2025-06-25)
- Heterogeneous Temporal Hypergraph Neural Network (2025-06-18)
- Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models (2025-06-17)
- Density-aware Walks for Coordinated Campaign Detection (2025-06-16)
- Delving into Instance-Dependent Label Noise in Graph Data: A Comprehensive Study and Benchmark (2025-06-14)
- Graph Semi-Supervised Learning for Point Classification on Data Manifolds (2025-06-13)