Flood and Echo Net: Algorithmically Aligned GNNs that Generalize
- URL: http://arxiv.org/abs/2310.06970v3
- Date: Mon, 3 Jun 2024 10:50:08 GMT
- Title: Flood and Echo Net: Algorithmically Aligned GNNs that Generalize
- Authors: Joël Mathys, Florian Grötschla, Kalyan Varma Nadimpalli, Roger Wattenhofer
- Abstract summary: We propose the Flood and Echo Net as a new type of Graph Neural Network.
The mechanism's inherent ability to generalize across graphs of varying sizes positions it as a practical architecture for the task of algorithmic learning.
We test the Flood and Echo Net on a variety of synthetic tasks and the SALSA-CLRS benchmark and find that the algorithmic alignment of the execution improves generalization to larger graph sizes.
- Score: 18.95453617434051
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Most Graph Neural Networks follow the standard message-passing framework where, in each step, all nodes simultaneously communicate with each other. We want to challenge this paradigm by aligning the computation more closely to the execution of distributed algorithms and propose the Flood and Echo Net. A single round of a Flood and Echo Net consists of an origin node, a flooding phase, and a subsequent echo phase. First, during the flooding, messages are sent from the origin and propagated outwards throughout the entire graph. Then, during the echo, the message flow reverses and messages are sent back towards the origin. As nodes are only sparsely activated upon receiving a message, this leads to a wave-like activation pattern that traverses the graph. Through these sparse but parallel activations, the Net becomes more expressive than traditional MPNNs, which are limited by the 1-WL test, and is also provably more efficient in terms of message complexity. Moreover, the mechanism's inherent ability to generalize across graphs of varying sizes positions it as a practical architecture for the task of algorithmic learning. We test the Flood and Echo Net on a variety of synthetic tasks and the SALSA-CLRS benchmark and find that the algorithmic alignment of the execution improves generalization to larger graph sizes.
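The round structure in the abstract maps naturally onto a breadth-first traversal: flood outward from the origin, then replay the same edges in reverse. Below is a minimal sketch of one round in Python, assuming plain callables in place of the learned message functions; all names are illustrative, and only the activation schedule described in the abstract is reproduced, not the paper's architecture.

```python
from collections import deque

def flood_and_echo_round(adj, h, origin, flood_msg, echo_msg):
    """One Flood and Echo round over an undirected graph.

    adj:       adjacency lists, e.g. {0: [1, 2], 1: [0], 2: [0]}
    h:         dict mapping each node to its current state
    flood_msg: (sender_state, receiver_state) -> new receiver state
    echo_msg:  same signature, used on the way back to the origin
    """
    # Flooding phase: messages travel outward from the origin in BFS order,
    # so each node is activated exactly once, by its BFS parent.
    visited = {origin}
    queue = deque([origin])
    flood_edges = []  # (parent, child) in activation order
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                h[v] = flood_msg(h[u], h[v])
                flood_edges.append((u, v))
                queue.append(v)

    # Echo phase: the flow reverses and messages return toward the origin,
    # traversing the flood edges in the opposite order.
    for u, v in reversed(flood_edges):
        h[u] = echo_msg(h[v], h[u])
    return h
```

In this schedule each node is activated by exactly one flood message and one echo message, so a full round uses a number of messages linear in the number of nodes, consistent with the message-complexity advantage claimed in the abstract.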
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z)
- Preventing Representational Rank Collapse in MPNNs by Splitting the Computational Graph [9.498398257062641]
We show that operating on multiple directed acyclic graphs always satisfies our condition, and we propose to obtain these DAGs by defining a strict partial ordering of the nodes (a minimal sketch follows this entry).
We conduct comprehensive experiments that confirm the benefits of operating on multi-relational graphs to achieve more informative node representations.
arXiv Detail & Related papers (2024-09-17T19:16:03Z)
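One concrete way to realize the ordering idea in the entry above: fix a strict ordering of the nodes and split the edge set into "forward" and "backward" edges, each of which forms a directed acyclic graph. A minimal sketch, with all names assumed rather than taken from the paper:

```python
def split_into_dags(edges, order):
    """Split an edge list into two DAGs via a strict node ordering.

    edges: iterable of (u, v) pairs
    order: dict mapping each node to a unique rank

    Every edge in `forward` strictly increases rank and every edge in
    `backward` strictly decreases it, so neither set can contain a cycle.
    """
    forward = [(u, v) for u, v in edges if order[u] < order[v]]
    backward = [(u, v) for u, v in edges if order[u] > order[v]]
    return forward, backward

# Example: a 3-cycle splits into two acyclic edge sets.
print(split_into_dags([(0, 1), (1, 2), (2, 0)], {0: 0, 1: 1, 2: 2}))
# ([(0, 1), (1, 2)], [(2, 0)])
```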
- Mesh-based Super-Resolution of Fluid Flows with Multiscale Graph Neural Networks [0.0]
A graph neural network (GNN) approach is introduced in this work which enables mesh-based three-dimensional super-resolution of fluid flows.
In this framework, the GNN is designed to operate not on the full mesh-based field at once, but on localized meshes of elements (or cells) directly.
arXiv Detail & Related papers (2024-09-12T05:52:19Z)
- GGNNs: Generalizing GNNs using Residual Connections and Weighted Message Passing [0.0]
GNNs excel at capturing relationships and patterns within graphs, enabling effective learning and prediction tasks.
It is commonly believed that the generalizing power of GNNs is attributed to the message-passing mechanism between layers.
Our technique builds on these results, modifying the message-passing mechanism in two ways: by weighting messages before they are accumulated at each node, and by adding residual connections (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-11-26T22:22:38Z)
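For concreteness, the sketch below implements the two modifications named in the GGNNs entry: each message is weighted by a learned gate before being accumulated at its destination node, and the layer output is added back to the input as a residual. The gate and update networks are illustrative assumptions, not the paper's parameterization.

```python
import torch
import torch.nn as nn

class WeightedResidualMP(nn.Module):
    """Message passing with per-message weighting and a residual connection."""

    def __init__(self, dim):
        super().__init__()
        # Scalar gate computed from each (sender, receiver) feature pair.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, h, edge_index):
        # h: (num_nodes, dim) node features; edge_index: (2, num_edges).
        src, dst = edge_index
        pair = torch.cat([h[src], h[dst]], dim=-1)
        msg = self.gate(pair) * h[src]                      # weigh messages...
        agg = torch.zeros_like(h).index_add_(0, dst, msg)   # ...then accumulate
        out = self.update(torch.cat([h, agg], dim=-1))
        return h + out                                      # residual connection
```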
- Cooperative Graph Neural Networks [7.2459816681395095]
A class of graph neural networks follows a standard message-passing paradigm.
We propose a novel framework for training graph neural networks.
Our approach offers a more flexible and dynamic message-passing paradigm.
arXiv Detail & Related papers (2023-10-02T15:08:52Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution over the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Shortest Path Networks for Graph Property Prediction [13.986963122264632]
Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood.
We propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in its shortest-path neighborhoods (a minimal sketch follows this entry).
Our framework generalizes message passing neural networks, resulting in provably more expressive models.
arXiv Detail & Related papers (2022-06-02T12:04:29Z)
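As one plausible reading of the shortest-path entry above, the sketch below precomputes, for every node, the nodes at each exact shortest-path distance up to a cutoff, then aggregates each distance shell with its own linear map. Names and the exact aggregation are assumptions, not the authors' implementation.

```python
from collections import deque
import torch
import torch.nn as nn

def distance_shells(adj, num_nodes, max_dist):
    """shells[k] lists (target, source) pairs at shortest-path distance k."""
    shells = {k: [] for k in range(1, max_dist + 1)}
    for s in range(num_nodes):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == max_dist:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    shells[dist[v]].append((v, s))  # v receives from s
                    q.append(v)
    return shells

class ShortestPathMP(nn.Module):
    """Aggregates each exact-distance shell with a distance-specific map."""

    def __init__(self, dim, max_dist):
        super().__init__()
        self.maps = nn.ModuleList(nn.Linear(dim, dim) for _ in range(max_dist))

    def forward(self, h, shells):
        # h: (num_nodes, dim); shells: output of distance_shells above.
        out = h.clone()
        for k, pairs in shells.items():
            if not pairs:
                continue
            tgt = torch.tensor([t for t, _ in pairs])
            src = torch.tensor([s for _, s in pairs])
            out = out.index_add(0, tgt, self.maps[k - 1](h[src]))
        return out
```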
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as evolving temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same fixed path through the network, DG-Net aggregates features dynamically at each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose a node labeling function to identify the role of each node in the subgraph (a minimal sketch follows this list).
Based on the extracted features, we use gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
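As a small illustration of the extraction step in the last entry, the sketch below gathers the nodes within $h$ hops of a target edge's endpoints and labels each node by its pair of distances to those endpoints, one common form of such a labeling function; the details are assumptions, not taken from the paper.

```python
from collections import deque

def h_hop_enclosing_subgraph(adj, edge, h):
    """Nodes within h hops of either endpoint of `edge`, with distance labels."""

    def bfs_dist(source):
        # Shortest-path distances from `source`, truncated at h hops.
        dist = {source: 0}
        q = deque([source])
        while q:
            x = q.popleft()
            if dist[x] == h:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return dist

    u, v = edge
    du, dv = bfs_dist(u), bfs_dist(v)
    nodes = set(du) | set(dv)
    # Label each node by its distance pair to the target edge's endpoints;
    # h + 1 marks "unreached within h hops".
    labels = {n: (du.get(n, h + 1), dv.get(n, h + 1)) for n in nodes}
    return nodes, labels
```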
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.