Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Graph Transformers without Positional Encodings

Ayush Garg

2024-01-31 · Graph Representation Learning · Representation Learning · Graph Regression · Graph Classification · Node Classification

Paper · PDF

Abstract

Recently, Transformers for graph representation learning have become increasingly popular, achieving state-of-the-art performance on a wide variety of graph datasets, either alone or in combination with message-passing graph neural networks (MP-GNNs). Infusing graph inductive biases into the innately structure-agnostic Transformer architecture in the form of structural or positional encodings (PEs) is key to these impressive results. However, designing such encodings is tricky, and disparate attempts have been made to engineer them, including Laplacian eigenvectors, relative random-walk probabilities (RRWP), spatial encodings, centrality encodings, and edge encodings. In this work, we argue that such encodings may not be required at all, provided the attention mechanism itself incorporates information about the graph structure. We introduce Eigenformer, a Graph Transformer employing a novel spectrum-aware attention mechanism cognizant of the Laplacian spectrum of the graph, and empirically show that it achieves performance competitive with SOTA Graph Transformers on a number of standard GNN benchmarks. Additionally, we theoretically prove that Eigenformer can express various graph structural connectivity matrices, which is particularly essential when learning over smaller graphs.
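The abstract's central idea, making the attention mechanism itself aware of the Laplacian spectrum instead of feeding positional encodings into node features, can be illustrated with a minimal sketch. This is not the paper's actual Eigenformer implementation; it is an assumed toy construction in which a structural attention bias for a node pair (i, j) is built as a learned function phi of the Laplacian eigenvalues weighted by the eigenvector products, bias_ij = sum_k phi(lambda_k) u_ik u_jk. The parameter `w_spec` and the linear form of `phi` are hypothetical choices for illustration:

```python
import numpy as np

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def spectrum_aware_attention(adj, h, w_spec):
    """Toy spectrum-aware attention over node features h (n x d).

    Instead of concatenating positional encodings to h, a structural
    bias derived from the Laplacian spectrum is added to the raw
    dot-product attention scores. `w_spec` parameterizes a hypothetical
    learned spectral filter phi(lambda) = w_spec * lambda.
    """
    lam, u = np.linalg.eigh(normalized_laplacian(adj))  # eigenvalues, eigenvectors
    phi = w_spec * lam                                  # assumed linear filter phi(lambda_k)
    bias = (u * phi[None, :]) @ u.T                     # bias_ij = sum_k phi(lambda_k) u_ik u_jk
    scores = h @ h.T / np.sqrt(h.shape[1]) + bias       # dot-product scores + structural bias
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ h                                  # attended node features
```

Because the bias is computed from the full eigendecomposition, the attention scores can reproduce spectral functions of the graph (e.g., the Laplacian itself when phi is the identity), which loosely mirrors the paper's expressivity claim about representing graph structural connectivity matrices.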

Results

Task                  Dataset          Metric        Value    Model
Graph Regression      ZINC             MAE           0.077    EIGENFORMER
Graph Regression      Peptides-struct  MAE           0.2599   EIGENFORMER
Graph Classification  MNIST            Accuracy (%)  98.362   EIGENFORMER
Graph Classification  CIFAR10 100k     Accuracy (%)  70.194   EIGENFORMER
Graph Classification  Peptides-func    AP            0.6414   EIGENFORMER
Node Classification   PATTERN          Accuracy (%)  86.738   EIGENFORMER
Node Classification   CLUSTER          Accuracy (%)  77.456   EIGENFORMER

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
A Mixed-Primitive-based Gaussian Splatting Method for Surface Reconstruction (2025-07-15)