Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Methods

100 machine learning methods and techniques

All · Audio · Computer Vision · General · Graphs · Natural Language Processing · Reinforcement Learning · Sequential

Contrastive Learning

Graphs · Introduced 2000 · 5057 papers

Graph Neural Network

Graphs · Introduced 2000 · 2694 papers

AWARE

Attentive Walk-Aggregating Graph Neural Network

We propose to theoretically and empirically examine the effect of incorporating weighting schemes into walk-aggregating GNNs. To this end, we propose a simple, interpretable, and end-to-end supervised GNN model, called AWARE (Attentive Walk-Aggregating GRaph Neural NEtwork), for graph-level prediction. AWARE aggregates walk information by means of weighting schemes at distinct levels (vertex-, walk-, and graph-level) in a principled manner. By virtue of these weighting schemes, AWARE can emphasize the information important for prediction while diminishing irrelevant information, leading to representations that can improve learning performance.

Graphs · Introduced 2000 · 1883 papers

GCN

Graph Convolutional Network

A Graph Convolutional Network, or GCN, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of convolutional neural networks which operate directly on graphs. The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.
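As an illustrative sketch (not the authors' reference implementation), the layer-wise propagation rule can be written in NumPy; the toy graph, features, and weight values below are made-up assumptions for demonstration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# toy 3-node path graph 0-1-2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.eye(3)            # one-hot node features
W = np.ones((3, 2))      # toy weights
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Note how the cost is dominated by the sparse product with the normalized adjacency, which is why the model scales linearly in the number of edges.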

Graphs · Introduced 2000 · 969 papers

LapEigen

Laplacian EigenMap

Graphs · Introduced 2000 · 297 papers

Laplacian PE

Laplacian Positional Encodings

Laplacian eigenvectors represent a natural generalization of the Transformer positional encodings (PE) for graphs, as the eigenvectors of a discrete line (the NLP graph) are the cosine and sinusoidal functions. They help encode distance-aware information (i.e., nearby nodes have similar positional features and farther nodes have dissimilar positional features). Hence, Laplacian Positional Encoding (PE) is a general method to encode node positions in a graph. For each node, its Laplacian PE is given by its entries in the k smallest non-trivial eigenvectors of the graph Laplacian.
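A minimal sketch of computing these encodings with NumPy (the function name and the toy 4-cycle graph are illustrative assumptions):

```python
import numpy as np

def laplacian_pe(A, k):
    """Laplacian positional encodings: each node's entries in the
    k smallest non-trivial eigenvectors of the normalized Laplacian."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt  # sym-normalized Laplacian
    vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    return vecs[:, 1:k + 1]           # skip the trivial (constant) eigenvector

# toy 4-cycle graph
A = np.zeros((4, 4))
for i in range(4):
    A[i, (i + 1) % 4] = A[(i + 1) % 4, i] = 1.0
pe = laplacian_pe(A, 2)
print(pe.shape)  # (4, 2): a 2-dimensional positional feature per node
```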

Graphs · Introduced 2000 · 296 papers

DCNN

Diffusion-Convolutional Neural Networks

The diffusion-convolutional neural network (DCNN) is a model for graph-structured data. Through the introduction of a diffusion-convolution operation, diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. Description from: Diffusion-Convolutional Neural Networks

Graphs · Introduced 2000 · 277 papers

GAT

Graph Attention Network

A Graph Attention Network (GAT) is a neural network architecture that operates on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, a GAT implicitly assigns different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront.
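The masked attention can be sketched in NumPy as follows; this is a single-head toy version under assumed shapes, not the paper's reference code:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(A, H, W, a):
    """Single-head GAT layer (sketch): attention is masked so each node
    attends only over its neighbourhood (plus itself)."""
    n = A.shape[0]
    Z = H @ W                                  # shared linear transform W h_j
    mask = (A + np.eye(n)) > 0                 # neighbours + self
    out = np.zeros_like(Z)
    for i in range(n):
        nbrs = np.where(mask[i])[0]
        # e_ij = LeakyReLU(a^T [W h_i || W h_j]), computed only for neighbours
        e = np.array([float(leaky_relu(a @ np.concatenate([Z[i], Z[j]])))
                      for j in nbrs])
        alpha = np.exp(e) / np.exp(e).sum()    # softmax-normalized attention
        out[i] = alpha @ Z[nbrs]               # attention-weighted aggregation
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.eye(3)
W = np.ones((3, 2))
a = np.array([0.1, -0.2, 0.3, 0.05])  # attention vector of size 2 * d_out
out = gat_layer(A, H, W, a)
print(out.shape)  # (3, 2)
```

No matrix inversion appears anywhere: each node only needs a softmax over its own neighbourhood.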

Graphs · Introduced 2000 · 197 papers

TuckER

TuckER

Graphs · Introduced 2000 · 183 papers

GraphSAGE

GraphSAGE is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Description from: Inductive Representation Learning on Large Graphs

Graphs · Introduced 2000 · 132 papers

node2vec

node2vec is a framework for learning embeddings of graph nodes. It maximizes a likelihood objective over mappings that preserve neighbourhood distances in a higher-dimensional space. From an algorithm-design perspective, node2vec exploits the freedom to define node neighbourhoods and provides an explanation for the effect of the choice of neighbourhood on the learned representations. For each node, node2vec simulates biased random walks based on an efficient network-aware search strategy, and the nodes appearing in a walk define its neighbourhood. The search strategy accounts for the relative influence nodes exert in a network, and it generalizes prior work on naive search strategies by providing flexibility in exploring neighbourhoods.
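The biased second-order walk can be sketched as below; the function name and the toy graph are illustrative assumptions, with p the return parameter and q the in-out parameter from the paper:

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0):
    """One biased (2nd-order) random walk as in node2vec.

    adj: dict node -> list of neighbours. Unnormalized transition weights:
    1/p to return to the previous node, 1 to a node adjacent to the
    previous node, 1/q to move further away.
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if len(walk) == 1:                       # first step is uniform
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)          # return to previous node
            elif x in adj[prev]:
                weights.append(1.0)              # stays at distance 1 from prev
            else:
                weights.append(1.0 / q)          # explores outward
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walk = node2vec_walk(adj, 0, 5, p=0.5, q=2.0)
print(len(walk))  # 5
```

Small q biases the walk toward DFS-like exploration; small p keeps it BFS-like and local.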

Graphs · Introduced 2000 · 104 papers

MPNN

Message Passing Neural Network

There are at least eight notable examples of models from the literature that can be described using the Message Passing Neural Network (MPNN) framework. For simplicity we describe MPNNs which operate on undirected graphs $G$ with node features $x_v$ and edge features $e_{vw}$; it is trivial to extend the formalism to directed multigraphs. The forward pass has two phases, a message passing phase and a readout phase. The message passing phase runs for $T$ time steps and is defined in terms of message functions $M_t$ and vertex update functions $U_t$. During the message passing phase, hidden states $h_v^t$ at each node in the graph are updated based on messages $m_v^{t+1}$ according to

$$m_v^{t+1} = \sum_{w \in N(v)} M_t(h_v^t, h_w^t, e_{vw}), \qquad h_v^{t+1} = U_t(h_v^t, m_v^{t+1}),$$

where $N(v)$ denotes the neighbors of $v$ in graph $G$. The readout phase computes a feature vector for the whole graph using some readout function $R$ according to

$$\hat{y} = R(\{h_v^T \mid v \in G\}).$$

The message functions $M_t$, vertex update functions $U_t$, and readout function $R$ are all learned differentiable functions. $R$ operates on the set of node states and must be invariant to permutations of the node states in order for the MPNN to be invariant to graph isomorphism.
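The two phases can be sketched generically; here the message, update, and readout functions are toy stand-ins (real MPNNs learn them), and all names and values are illustrative assumptions:

```python
import numpy as np

def mpnn_forward(adj, h, edge_feat, M, U, R, T):
    """Generic MPNN forward pass (sketch).

    adj: dict node -> neighbours; h: dict node -> state vector;
    edge_feat: dict (v, w) -> edge feature; M, U: message/update
    functions; R: permutation-invariant readout; T: number of steps.
    """
    for _ in range(T):
        # message passing phase: aggregate messages from neighbours
        msgs = {v: sum(M(h[v], h[w], edge_feat[(v, w)]) for w in adj[v])
                for v in adj}
        h = {v: U(h[v], msgs[v]) for v in adj}
    # readout phase: permutation-invariant function of all node states
    return R(list(h.values()))

adj = {0: [1], 1: [0]}
h = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
e = {(0, 1): np.array([0.1, 0.1]), (1, 0): np.array([0.1, 0.1])}
M = lambda hv, hw, evw: hw + evw     # toy message function
U = lambda hv, m: hv + m             # toy update function
R = lambda states: sum(states)       # sum readout (permutation-invariant)
y = mpnn_forward(adj, h, e, M, U, R, T=2)
print(y)
```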

Graphs · Introduced 2000 · 74 papers

TransE

TransE is an energy-based model that produces knowledge base embeddings. It models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Relationships are represented as translations in the embedding space: if $(h, \ell, t)$ holds, the embedding of the tail entity $t$ should be close to the embedding of the head entity $h$ plus some vector that depends on the relationship $\ell$.
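The translation idea reduces to a one-line score; the function name and the toy embeddings below are illustrative assumptions:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE energy d(h + r, t): lower means the triple is more
    plausible, since the relation acts as a translation in embedding space."""
    return np.linalg.norm(h + r - t)

h = np.array([1.0, 0.0])      # head entity embedding
r = np.array([0.0, 1.0])      # relation embedding (a translation)
t = np.array([1.0, 1.0])      # tail entity embedding
print(transe_score(h, r, t))  # 0.0 because h + r lands exactly on t
t_bad = np.array([0.0, 0.0])
print(transe_score(h, r, t_bad) > 0)  # True: implausible triple scores worse
```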

Graphs · Introduced 2000 · 73 papers

DeepWalk

DeepWalk learns embeddings (social representations) of a graph's vertices by modeling a stream of short random walks. Social representations are latent features of the vertices that capture neighborhood similarity and community membership. These latent representations encode social relations in a continuous vector space with a relatively small number of dimensions. DeepWalk generalizes neural language models to process a special language composed of a set of randomly generated walks. The goal is to learn a latent representation, not only a probability distribution of node co-occurrences, so a mapping function $\Phi \colon v \in V \mapsto \mathbb{R}^{|V| \times d}$ is introduced. This mapping $\Phi$ represents the latent social representation associated with each vertex in the graph. In practice, $\Phi$ is represented by a $|V| \times d$ matrix of free parameters.
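Generating the walk "corpus" that plays the role of sentences for the language model can be sketched as below (the function name, seed handling, and toy graph are illustrative assumptions; the skip-gram training step is omitted):

```python
import random

def deepwalk_corpus(adj, walks_per_node, walk_len, seed=0):
    """Build the corpus of uniform random walks that DeepWalk feeds to a
    skip-gram language model; each walk acts as one 'sentence'."""
    rng = random.Random(seed)
    corpus = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(adj[walk[-1]]))  # uniform next step
            corpus.append(walk)
    return corpus

adj = {0: [1], 1: [0, 2], 2: [1]}
corpus = deepwalk_corpus(adj, walks_per_node=2, walk_len=4)
print(len(corpus))  # 6 walks: 2 per node over 3 nodes
```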

Graphs · Introduced 2000 · 67 papers

HypE

Hyperboloid Embeddings

Hyperboloid Embeddings (HypE) is a self-supervised dynamic reasoning framework that utilizes positive first-order existential queries on a knowledge graph (KG) to learn representations of its entities and relations as hyperboloids in a Poincaré ball. HypE models positive first-order queries as geometric translation, intersection, and union operations. For KG reasoning on real-world datasets, HypE significantly outperforms state-of-the-art results. HypE has also been applied to an anomaly detection task on a popular e-commerce website's product taxonomy as well as hierarchically organized web articles, demonstrating significant performance improvements over existing baseline methods. Finally, HypE embeddings can be visualized in a Poincaré ball to clearly interpret and comprehend the representation space.

Graphs · Introduced 2000 · 49 papers

LightGCN

LightGCN is a graph convolutional network (GCN) that keeps only the most essential component of a GCN, neighborhood aggregation, for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings learned at all layers as the final embedding.
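Because there are no feature transforms or nonlinearities, the whole model is a few matrix products; this sketch uses a uniform layer average and made-up interaction data:

```python
import numpy as np

def lightgcn_embeddings(R, E0, num_layers):
    """LightGCN propagation (sketch): linearly propagate embeddings on the
    user-item bipartite graph, then average the embeddings of all layers.

    R: (n_users, n_items) interaction matrix;
    E0: (n_users + n_items, d) initial user/item embeddings.
    """
    n_u, n_i = R.shape
    A = np.zeros((n_u + n_i, n_u + n_i))       # bipartite adjacency
    A[:n_u, n_u:] = R
    A[n_u:, :n_u] = R.T
    d = A.sum(axis=1)
    d[d == 0] = 1.0                            # guard isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt
    layers = [E0]
    for _ in range(num_layers):
        layers.append(A_norm @ layers[-1])     # purely linear propagation
    return np.mean(layers, axis=0)             # uniform weights over layers

R = np.array([[1.0, 0.0], [1.0, 1.0]])         # 2 users x 2 items
E0 = np.eye(4)
E = lightgcn_embeddings(R, E0, num_layers=2)
print(E.shape)  # (4, 4)
```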

Graphs · Introduced 2000 · 46 papers

GIN

Graph Isomorphism Network

Per the authors, the Graph Isomorphism Network (GIN) generalizes the Weisfeiler-Lehman (WL) test and hence achieves maximum discriminative power among GNNs.

Graphs · Introduced 2000 · 43 papers

VERSE

VERtex Similarity Embeddings

VERtex Similarity Embeddings (VERSE) is a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure. VERSE learns such embeddings by training a single-layer neural network. Source: Tsitsulin et al.

Graphs · Introduced 2000 · 40 papers

ARMA

ARMA GNN

The ARMA GNN layer implements a rational graph filter with a recursive approximation.

Graphs · Introduced 2000 · 38 papers

RotatE

RotatE is a method for generating graph embeddings which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. The RotatE model is trained using a self-adversarial negative sampling technique.
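The rotation-based score is compact enough to sketch directly; the function name and toy embeddings are illustrative assumptions:

```python
import numpy as np

def rotate_score(h, r_phase, t):
    """RotatE distance d(h ∘ r, t): entities are complex vectors and the
    relation is an element-wise rotation e^{i*phase} (each |r_k| = 1)."""
    r = np.exp(1j * r_phase)                 # unit-modulus complex rotation
    return np.linalg.norm(h * r - t)

h = np.array([1.0 + 0.0j, 0.0 + 1.0j])       # head entity (complex)
r_phase = np.array([np.pi / 2, np.pi / 2])   # rotate each dimension by 90°
t = h * np.exp(1j * r_phase)                 # tail chosen so the triple holds
print(round(rotate_score(h, r_phase, t), 6))  # 0.0
```

Composition of relations corresponds to adding phases, which is what lets RotatE model composition, inversion (negate the phase), and symmetry (phase 0 or π).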

Graphs · Introduced 2000 · 29 papers

MoNet

Mixture model network

Mixture model network (MoNet) is a general framework for designing convolutional deep architectures on non-Euclidean domains such as graphs and manifolds. Description from: Geometric deep learning on graphs and manifolds using mixture model CNNs

Graphs · Introduced 2000 · 25 papers

VGAE

Variational Graph Auto Encoder

Graphs · Introduced 2000 · 23 papers

RGCN

Relational Graph Convolution Network

An RGCN, or Relational Graph Convolution Network, is an application of the GCN framework to modeling relational data, specifically to link prediction and entity classification tasks.

Graphs · Introduced 2000 · 22 papers

GNS

Graph Network-based Simulators

Graph Network-Based Simulators is a type of graph neural network that represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.

Graphs · Introduced 2000 · 21 papers

SchNet

Schrödinger Network

SchNet is an end-to-end deep neural network architecture based on continuous-filter convolutions. It follows the deep tensor neural network framework, i.e. atom-wise representations are constructed by starting from embedding vectors that characterize the atom type before introducing the configuration of the system by a series of interaction blocks.

Graphs · Introduced 2000 · 20 papers

RESCAL

RESCAL

Graphs · Introduced 2000 · 17 papers

GCA

Graph Contrastive learning with Adaptive augmentation

Graphs · Introduced 2000 · 16 papers

HOPE

High-Order Proximity preserved Embedding

Graphs · Introduced 2000 · 16 papers

TGN

Temporal Graph Network

Temporal Graph Network, or TGN, is a framework for deep learning on dynamic graphs represented as sequences of timed events. The memory (state) of the model at time $t$ consists of a vector $s_i(t)$ for each node $i$ the model has seen so far. The memory of a node is updated after an event (e.g. an interaction with another node or a node-wise change), and its purpose is to represent the node's history in a compressed format. Thanks to this module, TGNs can memorize long-term dependencies for each node in the graph. When a new node is encountered, its memory is initialized as the zero vector, and it is then updated for each event involving the node, even after the model has finished training.

Graphs · Introduced 2000 · 16 papers

GIC

Graph InfoClust

Graphs · Introduced 2000 · 14 papers

MEI

Multi-partition Embedding Interaction

MEI introduces the multi-partition embedding interaction technique with block term tensor format to systematically address the efficiency--expressiveness trade-off in knowledge graph embedding. It divides the embedding vector into multiple partitions and learns the local interaction patterns from data instead of using fixed special patterns as in ComplEx or SimplE models. This enables MEI to achieve optimal efficiency--expressiveness trade-off, not just being fully expressive. Previous methods such as TuckER, RESCAL, DistMult, ComplEx, and SimplE are suboptimal restricted special cases of MEI.

Graphs · Introduced 2000 · 14 papers

CGNN

Crystal Graph Neural Network

The full architecture of CGNN is presented at CGNN's official site.

Graphs · Introduced 2000 · 12 papers

RDF2Vec

Graphs · Introduced 2000 · 12 papers

DGI

Deep Graph Infomax

Deep Graph Infomax (DGI) is a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs, both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. Description from: Deep Graph Infomax

Graphs · Introduced 2000 · 10 papers

metapath2vec

Graphs · Introduced 2000 · 9 papers

APPNP

Approximation of Personalized Propagation of Neural Predictions

Neural message-passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighbourhood is hard to extend. This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters is on par or lower than previous models. It leverages a large, adjustable neighbourhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models.
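The APPNP propagation step is a short fixed-point iteration; this sketch (function name, toy graph, and hyperparameters are illustrative assumptions) diffuses per-node predictions H with personalized PageRank:

```python
import numpy as np

def appnp_propagate(A, H, alpha=0.1, num_iters=10):
    """APPNP propagation: iterate Z <- (1 - alpha) * A_hat @ Z + alpha * H,
    approximating personalized-PageRank diffusion of the predictions H.
    alpha is the teleport (restart) probability."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                  # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    Z = H
    for _ in range(num_iters):
        Z = (1 - alpha) * A_hat @ Z + alpha * H
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.eye(3)                                # toy per-node predictions
Z = appnp_propagate(A, H)
print(Z.shape)  # (3, 3)
```

Increasing `num_iters` enlarges the effective neighbourhood without adding parameters, which is the point of separating prediction from propagation.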

Graphs · Introduced 2000 · 9 papers

GATv2

Graph Attention Network v2

The GATv2 operator from the “How Attentive are Graph Attention Networks?” paper fixes the static attention problem of the standard GAT layer: since the linear layers in the standard GAT are applied right after each other, the ranking of attended nodes is unconditioned on the query node. In contrast, in GATv2 every node can attend to any other node. GAT scores a pair of nodes as $e(h_i, h_j) = \mathrm{LeakyReLU}(a^{\top} [W h_i \,\Vert\, W h_j])$, whereas the GATv2 scoring function moves $a^{\top}$ outside the nonlinearity: $e(h_i, h_j) = a^{\top} \mathrm{LeakyReLU}(W [h_i \,\Vert\, h_j])$.

Graphs · Introduced 2000 · 9 papers

AGCN

Adaptive Graph Convolutional Neural Networks

AGCN is a novel spectral graph convolution network that can feed on original data of diverse graph structures. Description from: Adaptive Graph Convolutional Neural Networks

Graphs · Introduced 2000 · 8 papers

GraphSAINT

Graph sampling based inductive learning method

A scalable method for training large-scale GNN models by sampling small subgraphs.

Graphs · Introduced 2000 · 7 papers

GraphCL

Graph contrastive learning with augmentations

Graphs · Introduced 2000 · 7 papers

GCNII

GCNII is an extension of the Graph Convolutional Network with two new techniques, initial residual and identity mapping, to tackle the problem of oversmoothing, where stacking more layers and adding non-linearity tends to degrade performance. At each layer, the initial residual constructs a skip connection from the input layer, while identity mapping adds an identity matrix to the weight matrix.

Graphs · Introduced 2000 · 7 papers

DiffPool

DiffPool is a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Description from: Hierarchical Graph Representation Learning with Differentiable Pooling
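A single coarsening step reduces to two matrix products once the soft assignment matrix S is given (in DiffPool, S is produced by a GNN followed by a row-wise softmax; here a hard toy assignment stands in for it):

```python
import numpy as np

def diffpool(A, X, S):
    """One DiffPool coarsening step. S: (n_nodes, n_clusters) soft
    cluster-assignment matrix with rows summing to 1.

    Returns pooled features X' = S^T X and coarsened adjacency A' = S^T A S."""
    return S.T @ X, S.T @ A @ S

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)            # 4-node path graph
X = np.eye(4)
S = np.array([[1.0, 0.0],                      # nodes 0,1 -> cluster 0
              [1.0, 0.0],
              [0.0, 1.0],                      # nodes 2,3 -> cluster 1
              [0.0, 1.0]])
X_pool, A_pool = diffpool(A, X, S)
print(A_pool)  # [[2. 1.] [1. 2.]]
```

The diagonal of A' counts within-cluster edge weight and the off-diagonal counts edges between clusters, so the coarsened graph preserves connectivity structure.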

Graphs · Introduced 2000 · 7 papers

ChebNet

ChebNet involves a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Description from: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
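The fast localized filters come from a Chebyshev recurrence, which this sketch evaluates on a toy graph (the function name, rescaling by $\lambda_{max} \approx 2$, and coefficient values are illustrative assumptions):

```python
import numpy as np

def cheb_filter(L_scaled, X, thetas):
    """K-order Chebyshev graph filter: sum_k theta_k * T_k(L_scaled) @ X,
    with T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2}.
    L_scaled must have its spectrum rescaled into [-1, 1]."""
    T_prev, T_cur = X, L_scaled @ X
    out = thetas[0] * T_prev + thetas[1] * T_cur
    for theta in thetas[2:]:
        T_prev, T_cur = T_cur, 2 * L_scaled @ T_cur - T_prev
        out = out + theta * T_cur
    return out

# path graph 0-1-2; normalized Laplacian rescaled via L - I (lambda_max ~= 2)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt
L_scaled = L - np.eye(3)
X = np.eye(3)
out = cheb_filter(L_scaled, X, thetas=[0.5, 0.3, 0.2])
print(out.shape)  # (3, 3)
```

A K-order filter touches only K-hop neighbourhoods, which is what makes the filters localized without computing an eigendecomposition.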

Graphs · Introduced 2000 · 6 papers

DAGNN

Directed Acyclic Graph Neural Network

A GNN for DAGs (directed acyclic graphs) that injects their topological order as an inductive bias via asynchronous message passing.

Graphs · Introduced 2000 · 6 papers

PNA

Principal Neighbourhood Aggregation

Principal Neighbourhood Aggregation (PNA) is a general and flexible architecture for graphs combining multiple aggregators with degree-scalers (which generalize the sum aggregator).
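A rough sketch of the idea (the function name, the specific aggregator set, and the single amplification scaler are illustrative assumptions; PNA combines several scalers and learns a transform on top):

```python
import numpy as np

def pna_aggregate(A, H):
    """PNA-style aggregation (sketch): concatenate several aggregators
    (mean, max, min, std) and scale by a log-degree scaler."""
    n = A.shape[0]
    delta = np.mean(np.log(A.sum(axis=1) + 1))   # average log-degree of the graph
    out = []
    for i in range(n):
        nbrs = np.where(A[i] > 0)[0]
        msgs = H[nbrs]
        aggs = np.concatenate([msgs.mean(0), msgs.max(0),
                               msgs.min(0), msgs.std(0)])
        scaler = np.log(len(nbrs) + 1) / delta   # amplification degree-scaler
        out.append(scaler * aggs)
    return np.stack(out)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # toy triangle graph
H = np.eye(3)
out = pna_aggregate(A, H)
print(out.shape)  # (3, 12): 4 aggregators x 3 features per node
```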

Graphs · Introduced 2000 · 5 papers

GraRep

Graph Representation with Global structure

Graphs · Introduced 2000 · 5 papers

BiGCN

Bi-Directional Graph Convolutional Network

Graphs · Introduced 2000 · 4 papers

iGCL

Implicit Graph Contrastive Learning


Graphs · Introduced 2000 · 4 papers

ComplEx-N3

ComplEx with N3 Regularizer

ComplEx model trained with a nuclear norm regularizer

Graphs · Introduced 2000 · 4 papers

L-GCN

Learnable adjacency matrix GCN

The graph structure (adjacency matrix) is learnable.

Graphs · Introduced 2000 · 3 papers
Page 1 of 2