GREAD: Graph Neural Reaction-Diffusion Networks
- URL: http://arxiv.org/abs/2211.14208v3
- Date: Wed, 14 Jun 2023 22:53:38 GMT
- Title: GREAD: Graph Neural Reaction-Diffusion Networks
- Authors: Jeongwhan Choi, Seoyoung Hong, Noseong Park, Sung-Bae Cho
- Abstract summary: Graph neural networks (GNNs) are one of the most popular research topics for deep learning.
Diffusion equations have been widely used for designing the core processing layer of GNNs.
We present a reaction-diffusion equation-based GNN method that considers all popular types of reaction equations.
- Score: 22.90737022395036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are one of the most popular research topics for
deep learning. GNN methods typically have been designed on top of the graph
signal processing theory. In particular, diffusion equations have been widely
used for designing the core processing layer of GNNs, and therefore they are
inevitably vulnerable to the notorious oversmoothing problem. Recently, a
couple of papers paid attention to reaction equations in conjunction with
diffusion equations. However, they all consider limited forms of reaction
equations. To address this, we present a reaction-diffusion equation-based GNN
method that considers all popular types of reaction equations in addition to
one special reaction equation designed by us. To our knowledge, our paper is
one of the most comprehensive studies on reaction-diffusion equation-based
GNNs. In our experiments with 9 datasets and 28 baselines, our method, called
GREAD, outperforms them in a majority of cases. Further synthetic data
experiments show that it mitigates the oversmoothing problem and works well for
various homophily rates.
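The reaction-diffusion idea in the abstract can be sketched as a single layer update: a diffusion term smooths node features over the graph while a reaction term pushes back against oversmoothing. The NumPy sketch below is a minimal, hypothetical illustration, assuming a row-normalized adjacency matrix `A_hat` and using a Fisher-type reaction term (one of the popular reaction equations the abstract mentions); the function name and coefficients are illustrative, not GREAD's actual implementation.

```python
import numpy as np

def reaction_diffusion_step(A_hat, X, alpha=0.5, beta=0.1):
    """One hypothetical reaction-diffusion update on node features X."""
    diffusion = A_hat @ X - X   # (A_hat - I) X: smooths features along edges
    reaction = X * (1.0 - X)    # Fisher-type reaction term, resists oversmoothing
    return X + alpha * diffusion + beta * reaction

# Tiny usage example on a 3-node path graph
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalize adjacency
X = np.array([[1.0], [0.0], [0.5]])        # one scalar feature per node
X_next = reaction_diffusion_step(A_hat, X)
```

Without the reaction term (`beta=0`), repeated application of this step drives all node features toward the same value, which is exactly the oversmoothing behavior the paper targets.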
Related papers
- ChemHGNN: A Hierarchical Hypergraph Neural Network for Reaction Virtual Screening and Discovery [19.298076697406977]
ChemHGNN is a hypergraph neural network framework that captures high-order relationships in reaction networks.
Our work establishes HGNNs as a superior alternative to GNNs for reaction virtual screening and discovery, offering a chemically informed framework for accelerating reaction discovery.
arXiv Detail & Related papers (2025-05-21T04:58:25Z)
- Spiking Graph Neural Network on Riemannian Manifolds [51.15400848660023]
Graph neural networks (GNNs) have become the dominant solution for learning on graphs.
Existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry.
We present a Manifold-valued Spiking GNN (MSG)
MSG achieves superior performance to previous spiking GNNs and energy efficiency to conventional GNNs.
arXiv Detail & Related papers (2024-10-23T15:09:02Z)
- Graph Neural Reaction Diffusion Models [14.164952387868341]
We propose a novel family of Reaction GNNs based on neural RD systems.
We discuss the theoretical properties of our RDGNN, its implementation, and show that it improves or offers competitive performance to state-of-the-art methods.
arXiv Detail & Related papers (2024-06-16T09:46:58Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the logarithm of the graph size and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- A Generalized Neural Diffusion Framework on Graphs [36.867530311300925]
We propose a general diffusion equation framework with a fidelity term, which formally establishes the relationship between the diffusion process and a broader class of GNNs.
With the high-order diffusion equation, HiD-Net is more robust against attacks and works on both homophily and heterophily graphs.
arXiv Detail & Related papers (2023-12-14T02:41:12Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- Novel DNNs for Stiff ODEs with Applications to Chemically Reacting Flows [0.0]
Chemically reacting flows are common in engineering, such as hypersonic flow, combustion, explosions, manufacturing processes and environmental assessments.
For combustion, the number of reactions can be significant (over 100), and due to the very large CPU requirements, many flow and combustion problems are presently beyond the capabilities of even the largest supercomputers.
Motivated by this, novel Deep Neural Networks (DNNs) are introduced to approximate stiff ODEs.
arXiv Detail & Related papers (2021-04-01T22:54:22Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
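Parameterizing a kernel directly in Fourier space, as this abstract describes, can be illustrated with a minimal 1-D sketch: transform the input, scale a truncated set of low-frequency modes by complex weights (standing in for learned parameters), and transform back. The names, weights, and shapes below are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def spectral_layer(u, weights, modes):
    """Hypothetical spectral layer: a mode-wise kernel applied in Fourier space."""
    u_hat = np.fft.rfft(u)                     # to Fourier space
    out_hat = np.zeros_like(u_hat)             # complex, same shape as u_hat
    out_hat[:modes] = u_hat[:modes] * weights  # learned kernel acts on low modes
    return np.fft.irfft(out_hat, n=len(u))     # back to physical space

# Usage on a simple sine signal; random weights stand in for trained parameters
rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
w = rng.normal(size=8) + 1j * rng.normal(size=8)
v = spectral_layer(u, w, modes=8)
```

Because the layer acts on Fourier modes rather than grid points, the same weights can in principle be applied to inputs sampled at different resolutions, which is one reason this parameterization is attractive for PDE surrogates.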
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.