ACMP: Allen-Cahn Message Passing for Graph Neural Networks with Particle
Phase Transition
- URL: http://arxiv.org/abs/2206.05437v3
- Date: Mon, 24 Apr 2023 01:52:14 GMT
- Title: ACMP: Allen-Cahn Message Passing for Graph Neural Networks with Particle
Phase Transition
- Authors: Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, Shi Jin
- Abstract summary: We introduce Allen-Cahn message passing (ACMP) for graph neural networks.
The dynamics of the system are a reaction-diffusion process that can separate particles without blowing up.
It provides a model of GNNs that circumvents the common GNN problem of oversmoothing.
- Score: 24.59894322312533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural message passing is a basic feature extraction unit for
graph-structured data considering neighboring node features in network
propagation from one layer to the next. We model this process by an interacting
particle system with attractive and repulsive forces and the Allen-Cahn force
arising in the modeling of phase transition. The dynamics of the system is a
reaction-diffusion process which can separate particles without blowing up.
This induces an Allen-Cahn message passing (ACMP) for graph neural networks
where the numerical iteration for the particle system solution constitutes the
message passing propagation. ACMP, which has a simple implementation with a
neural ODE solver, can propel the network depth up to one hundred layers with
a theoretically proven, strictly positive lower bound on the Dirichlet energy. It
thus provides a deep model of GNNs circumventing the common GNN problem of
oversmoothing. GNNs with ACMP achieve state-of-the-art performance for
real-world node classification tasks on both homophilic and heterophilic
datasets.
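The abstract describes ACMP as the numerical iteration of an interacting particle system: attractive and repulsive forces along edges plus an Allen-Cahn reaction term that keeps features from collapsing or blowing up. A minimal forward-Euler sketch of this idea (not the authors' implementation, which uses a neural ODE solver; the parameters `alpha`, `beta`, `delta` and the exact update form are illustrative assumptions) might look like:

```python
import numpy as np

def acmp_step(H, A, dt=0.1, alpha=1.0, beta=0.5, delta=1.0):
    """One explicit-Euler step of an Allen-Cahn-style message-passing update.

    H     : (n, d) node features, viewed as particle positions
    A     : (n, n) binary adjacency matrix
    alpha : attraction strength along edges (assumed hyperparameter)
    beta  : repulsion strength along edges (assumed hyperparameter)
    delta : weight of the Allen-Cahn term delta * H * (1 - H**2),
            which drives each feature toward the wells at +1 / -1
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    neighbour_mean = (A @ H) / deg
    # net attraction minus repulsion toward the neighbourhood mean
    diffusion = (alpha - beta) * (neighbour_mean - H)
    # double-well reaction term from the Allen-Cahn equation
    allen_cahn = delta * H * (1.0 - H ** 2)
    return H + dt * (diffusion + allen_cahn)
```

Stacking such steps plays the role of message-passing layers; the double-well term is what keeps the Dirichlet energy bounded away from zero, preventing all features from smoothing to a single point.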
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z)
- Scalable and Consistent Graph Neural Networks for Distributed Mesh-based Data-driven Modeling [0.0]
This work develops a distributed graph neural network (GNN) methodology for mesh-based modeling applications.
Consistency refers to the fact that a GNN trained and evaluated on one rank (one large graph) is arithmetically equivalent to evaluation on multiple ranks (a partitioned graph).
It is shown how the NekRS mesh partitioning can be linked to the distributed GNN training and inference routines, resulting in a scalable mesh-based data-driven modeling workflow.
arXiv Detail & Related papers (2024-10-02T15:22:27Z)
- Neural Message Passing Induced by Energy-Constrained Diffusion [79.9193447649011]
We propose an energy-constrained diffusion model as a principled interpretable framework for understanding the mechanism of MPNNs.
We show that the new model can yield promising performance for cases where the data structures are observed (as a graph), partially observed or completely unobserved.
arXiv Detail & Related papers (2024-09-13T17:54:41Z)
- Bundle Neural Networks for message diffusion on graphs [10.018379001231356]
We prove that Bundle Neural Networks (BuNNs) can approximate any feature transformation over nodes on any family of graphs given injective positional encodings, resulting in universal node-level expressivity.
arXiv Detail & Related papers (2024-05-24T13:28:48Z)
- Neighborhood Convolutional Network: A New Paradigm of Graph Neural Networks for Node Classification [12.062421384484812]
Graph Convolutional Network (GCN) decouples neighborhood aggregation and feature transformation in each convolutional layer.
In this paper, we propose a new paradigm of GCN, termed Neighborhood Convolutional Network (NCN).
In this way, the model inherits the merit of decoupled GCN for aggregating neighborhood information while developing much more powerful feature-learning modules.
arXiv Detail & Related papers (2022-11-15T02:02:51Z)
- Transformer with Implicit Edges for Particle-based Physics Simulation [135.77656965678196]
Transformer with Implicit Edges (TIE) captures the rich semantics of particle interactions in an edge-free manner.
We evaluate our model on diverse domains of varying complexity and materials.
arXiv Detail & Related papers (2022-07-22T03:45:29Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Resonant tunnelling diode nano-optoelectronic spiking nodes for neuromorphic information processing [0.0]
We introduce an optoelectronic artificial neuron capable of operating at ultrafast rates and with low energy consumption.
The proposed system combines an excitable resonant tunnelling diode (RTD) element with a nanoscale light source.
arXiv Detail & Related papers (2021-07-14T14:11:04Z)
- A tensor network representation of path integrals: Implementation and analysis [0.0]
We introduce a novel tensor network-based decomposition of path integral simulations involving the Feynman-Vernon influence functional.
The finite, temporally non-local interactions introduced by the influence functional can be captured very efficiently using a matrix product state representation.
The flexibility of the AP-TNPI framework makes it a promising new addition to the family of path integral methods for non-equilibrium quantum dynamics.
arXiv Detail & Related papers (2021-06-23T16:41:54Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
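The denoising view summarized in the last entry can be illustrated with a short sketch: starting from H = X, one gradient-descent step on the objective E(H) = ||H - X||^2 + c * tr(H^T L H) yields a Laplacian-smoothing propagation of the GCN kind. This is a generic illustration of that equivalence, not the ADA-UGNN model itself; the step size `lr` and weight `c` are illustrative assumptions.

```python
import numpy as np

def denoise_step(X, L, c=1.0, lr=0.25):
    """One gradient step on E(H) = ||H - X||^2 + c * tr(H^T L H),
    started from H = X. At H = X the gradient is 2 * c * L @ X, so the
    step amounts to the smoothing propagation (I - 2*lr*c*L) @ X."""
    return X - lr * 2.0 * c * (L @ X)
```

With the normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}, this step strictly reduces the Dirichlet energy tr(H^T L H) for any non-constant signal, which is exactly the smoothing behavior that aggregation layers in the cited GNN models share.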
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.