Half-Hop: A graph upsampling approach for slowing down message passing
- URL: http://arxiv.org/abs/2308.09198v1
- Date: Thu, 17 Aug 2023 22:24:15 GMT
- Title: Half-Hop: A graph upsampling approach for slowing down message passing
- Authors: Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin,
Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L. Dyer
- Abstract summary: We introduce a framework for improving learning in message passing neural networks.
Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge.
Our method only modifies the input graph, making it plug-and-play and easy to use with existing models.
- Score: 31.26080679115766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message passing neural networks have shown a lot of success on
graph-structured data. However, there are many instances where message passing
can lead to over-smoothing or fail when neighboring nodes belong to different
classes. In this work, we introduce a simple yet general framework for
improving learning in message passing neural networks. Our approach essentially
upsamples edges in the original graph by adding "slow nodes" at each edge that
can mediate communication between a source and a target node. Our method only
modifies the input graph, making it plug-and-play and easy to use with existing
models. To understand the benefits of slowing down message passing, we provide
theoretical and empirical analyses. We report results on several supervised and
self-supervised benchmarks, and show improvements across the board, notably in
heterophilic conditions where adjacent nodes are more likely to have different
labels. Finally, we show how our approach can be used to generate augmentations
for self-supervised learning, where slow nodes are randomly introduced into
different edges in the graph to generate multi-scale views with variable path
lengths.
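Since the method only rewires the input graph, the abstract is concrete enough to sketch: each directed edge (u, v) is replaced by a two-hop path through a new "slow node", and for the self-supervised augmentations, only a random subset of edges is rewired. Below is a minimal sketch in plain Python, assuming the slow node's features are an interpolation of its endpoints (the initializer, and names such as `half_hop`, `p`, and `alpha`, are illustrative, not the authors' reference implementation):

```python
import random

def half_hop(edges, features, p=1.0, alpha=0.5, seed=0):
    """Upsample a directed graph by inserting a "slow node" on each edge.

    edges:    list of (src, dst) pairs over node ids 0..n-1
    features: list of per-node feature vectors
    p:        probability of rewiring a given edge; p < 1.0 gives the
              randomized multi-scale augmentation described in the abstract
    alpha:    interpolation weight for the slow node's features
              (an assumption; the abstract does not fix the initializer)
    """
    rng = random.Random(seed)
    new_edges = []
    new_features = [list(x) for x in features]
    for src, dst in edges:
        if rng.random() >= p:
            new_edges.append((src, dst))  # leave this edge as-is
            continue
        # Create the slow node with interpolated endpoint features.
        slow = len(new_features)
        new_features.append([(1 - alpha) * a + alpha * b
                             for a, b in zip(features[src], features[dst])])
        # Replace src -> dst by the two-hop path src -> slow -> dst,
        # so each message now takes two steps to cross the original edge.
        new_edges.append((src, slow))
        new_edges.append((slow, dst))
    return new_edges, new_features

# Toy usage: a 3-node path graph with scalar features.
edges = [(0, 1), (1, 2)]
features = [[0.0], [1.0], [2.0]]
print(half_hop(edges, features, p=1.0))
```

With p=1.0 every edge is rewired (the deterministic variant); sampling different p per view yields the multi-scale views with variable path lengths mentioned in the abstract.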
Related papers
- Meta-GPS++: Enhancing Graph Meta-Learning with Contrastive Learning and Self-Training [22.473322546354414]
We propose a novel framework for few-shot node classification called Meta-GPS++.
We first adopt an efficient method to learn discriminative node representations on homophilic and heterophilic graphs.
We also apply self-training to extract valuable information from unlabeled nodes.
arXiv Detail & Related papers (2024-07-20T03:05:12Z)
- GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks [2.4175455407547015]
Graph neural networks learn to represent nodes by aggregating information from their neighbors, which becomes expensive on large graphs.
Several existing methods address this by sampling a small subset of nodes, scaling GNNs to much larger graphs.
We introduce GRAPES, an adaptive sampling method that learns to identify the set of nodes crucial for training a GNN.
arXiv Detail & Related papers (2023-10-05T09:08:47Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (a sketch of the underlying Gumbel-Softmax relaxation follows this entry).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
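The summary above does not specify NodeFormer's kernelization, so no attempt is made to reproduce it here; for context, this is a minimal sketch of the standard Gumbel-Softmax relaxation that such an operator builds on. It turns discrete neighbor selection into a differentiable soft one-hot over candidates (the function name and temperature value are illustrative):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of sampling one of n discrete choices.

    logits: unnormalized scores over candidate neighbors, shape (n,)
    tau:    temperature; smaller values give harder, more discrete samples
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via the inverse-CDF trick; epsilons avoid log(0).
    u = rng.uniform(size=logits.shape)
    gumbels = -np.log(-np.log(u + 1e-20) + 1e-20)
    y = (logits + gumbels) / tau
    y = y - y.max()              # numerical stability before exp
    probs = np.exp(y)
    return probs / probs.sum()   # soft one-hot over the candidates

# Toy usage: relaxed selection of one neighbor out of four candidates.
scores = np.array([1.2, 0.3, -0.5, 2.0])
print(gumbel_softmax(scores, tau=0.5))
```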
- Framelet Message Passing [2.479720095773358]
We propose a new message passing scheme based on multiscale framelet transforms, called Framelet Message Passing.
It integrates framelet representations of neighbor nodes from multiple hops away into the node message update.
We also propose a continuous message passing using neural ODE solvers.
arXiv Detail & Related papers (2023-02-28T17:56:19Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Shortest Path Networks for Graph Property Prediction [13.986963122264632]
Most graph neural network models rely on a particular message passing paradigm, in which node representations are iteratively propagated to each node from its direct neighborhood.
We propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node from its shortest path neighborhoods (a sketch of the idea follows this entry).
Our framework generalizes message passing neural networks, resulting in provably more expressive models.
arXiv Detail & Related papers (2022-06-02T12:04:29Z)
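The shortest-path neighborhoods are well defined from the summary alone: the nodes at exactly shortest-path distance d from a given node. A minimal sketch using BFS, with an illustrative aggregation (sum of per-distance neighborhood averages); the paper's actual aggregation and learned parameters are not given in the summary:

```python
from collections import deque

def shortest_path_layers(adj, source, max_dist):
    """BFS from source; returns {d: nodes at shortest-path distance d}."""
    dist = {source: 0}
    layers = {d: set() for d in range(1, max_dist + 1)}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == max_dist:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                layers[dist[v]].add(v)
                queue.append(v)
    return layers

def sp_message_passing(adj, features, max_dist=2):
    """One layer: each node adds the average feature of every
    distance-d neighborhood to its own (illustrative aggregation)."""
    out = []
    for u in range(len(adj)):
        h = list(features[u])
        for layer in shortest_path_layers(adj, u, max_dist).values():
            if not layer:
                continue
            avg = [sum(features[v][i] for v in layer) / len(layer)
                   for i in range(len(h))]
            h = [a + b for a, b in zip(h, avg)]
        out.append(h)
    return out

# Toy usage: a 4-node path graph 0-1-2-3 with scalar features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
features = [[0.0], [1.0], [2.0], [3.0]]
print(sp_message_passing(adj, features, max_dist=2))
```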
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs [54.176285420428776]
We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation.
With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph, not only direct neighbors, facilitating the detection of global patterns.
Graformer learns to weight these node-node relations differently for different attention heads, virtually learning differently connected views of the input graph (a sketch of distance-biased graph attention follows this entry).
arXiv Detail & Related papers (2020-06-16T15:20:04Z)
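From the title and summary, Graformer's self-attention is all-pair and weights node pairs by their relative position in the graph; a natural reading is a learned bias indexed by shortest-path distance, with a different bias vector per attention head. A minimal single-head sketch under that assumption (the bias values and names are illustrative, not the paper's parameters):

```python
import numpy as np

def graph_self_attention(x, dist, dist_bias):
    """Single-head self-attention over all nodes, biased by graph distance.

    x:         node features, shape (n, d)
    dist:      pairwise shortest-path distances, shape (n, n), integers
    dist_bias: learned bias per distance value, shape (max_dist + 1,);
               giving each head its own vector yields the differently
               connected "views" of the graph described above
    """
    d = x.shape[1]
    scores = (x @ x.T) / np.sqrt(d) + dist_bias[dist]  # content + position
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ x

# Toy usage: 3 nodes on a path, distances 0..2, a bias favoring nearby nodes.
x = np.eye(3)
dist = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
bias = np.array([0.0, -0.5, -1.0])  # illustrative values
print(graph_self_attention(x, dist, bias))
```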
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.