A charge-preserving method for solving graph neural diffusion networks
- URL: http://arxiv.org/abs/2312.10279v1
- Date: Sat, 16 Dec 2023 01:06:31 GMT
- Title: A charge-preserving method for solving graph neural diffusion networks
- Authors: Lidia Aceto and Pietro Antonio Grassi
- Abstract summary: This paper gives a systematic mathematical interpretation of the diffusion problem on which Graph Neural Networks (GNNs) models are based.
Knowing the charge values and their conservation along the evolution flow could provide a way to understand how GNNs and other networks achieve their learning capabilities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aim of this paper is to give a systematic mathematical interpretation of
the diffusion problem on which Graph Neural Networks (GNNs) models are based.
The starting point of our approach is a dissipative functional leading to
dynamical equations which allow us to study the symmetries of the model. We
discuss the conserved charges and provide a charge-preserving numerical method
for solving the dynamical equations. In any dynamical system, and also in GRAph
Neural Diffusion (GRAND), knowing the charge values and their conservation
along the evolution flow could provide a way to understand how GNNs and other
networks achieve their learning capabilities.
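As a minimal illustration of the charge-conservation idea, consider the simplest linear graph diffusion flow dx/dt = -Lx with L the graph Laplacian. This is a toy sketch, not the paper's method: the graph, step size, and the choice of total feature mass as the "charge" are all illustrative assumptions. Because the rows of L sum to zero, the explicit Euler update preserves the total charge exactly at every step while the features relax toward a uniform state.

```python
import numpy as np

# Toy graph diffusion dx/dt = -L x with the combinatorial Laplacian L.
# The row sums of L are zero, so the total "charge" sum(x) is conserved
# along the flow, and the Euler update x <- x - h * L @ x preserves it.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a toy graph
L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian

x = np.array([1.0, 0.0, 0.0, 2.0])          # initial node features
charge0 = x.sum()                           # conserved quantity

h = 0.05                                    # step size (stable: h < 2/lambda_max)
for _ in range(200):                        # evolve the diffusion flow
    x = x - h * (L @ x)

print(abs(x.sum() - charge0))               # conserved up to round-off
print(np.allclose(x, x.mean(), atol=1e-3))  # features relax toward uniform
```

Monitoring such invariants along the flow is exactly the kind of diagnostic the abstract suggests for understanding a trained diffusion network.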
Related papers
- Graph Neural Reaction Diffusion Models [14.164952387868341]
We propose a novel family of Reaction GNNs based on neural RD systems.
We discuss the theoretical properties of our RDGNN, its implementation, and show that it improves or offers competitive performance to state-of-the-art methods.
arXiv Detail & Related papers (2024-06-16T09:46:58Z)
- Graph Neural Aggregation-diffusion with Metastability [4.040326569845733]
Continuous graph neural models based on differential equations have expanded the architecture of graph neural networks (GNNs).
We propose GRADE inspired by graph aggregation-diffusion equations, which includes the delicate balance between nonlinear diffusion and aggregation induced by interaction potentials.
We prove that GRADE achieves competitive performance across various benchmarks and alleviates the over-smoothing issue in GNNs.
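A toy sketch of the over-smoothing issue mentioned above: stacking plain mean-aggregation layers drives all node features toward a common value, erasing the structure that distinguished the nodes. The graph, features, and smoothing operator here are illustrative assumptions, not GRADE's actual aggregation-diffusion dynamics.

```python
import numpy as np

# Repeatedly applying a row-stochastic smoothing operator (mean aggregation
# with self-loops) collapses the spread of node features toward zero --
# the over-smoothing behavior that nonlinear aggregation-diffusion models
# are designed to counteract.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph 0-1-2-3
P = A + np.eye(4)                           # add self-loops
P = P / P.sum(axis=1, keepdims=True)        # row-stochastic smoothing

x = np.array([1.0, 0.0, 0.0, -1.0])         # distinct node features
spread = [float(x.std())]
for _ in range(50):                         # 50 stacked smoothing layers
    x = P @ x
    spread.append(float(x.std()))

print(spread[0], spread[-1])                # std collapses toward 0
```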
arXiv Detail & Related papers (2024-03-29T15:05:57Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-time Dynamics [13.912268915939656]
The prevailing approach of integrating graph neural networks with ordinary differential equations has demonstrated promising performance.
We introduce a novel approach: a signed graph neural ordinary differential equation, adeptly addressing the limitations of miscapturing signed information.
Our proposed solution boasts both flexibility and efficiency.
arXiv Detail & Related papers (2023-12-18T13:45:33Z)
- Resilient Graph Neural Networks: A Coupled Dynamical Systems Approach [12.856220339384269]
Graph Neural Networks (GNNs) have established themselves as a key component in addressing diverse graph-based tasks.
Despite their notable successes, GNNs remain susceptible to input perturbations in the form of adversarial attacks.
This paper introduces an innovative approach to fortify GNNs against adversarial perturbations through the lens of coupled dynamical systems.
arXiv Detail & Related papers (2023-11-12T20:06:48Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z)
- Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks [80.8446673089281]
We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs).
We propose Neural Heat Kernel (NHK) to encapsulate the geometric property of the underlying manifold concerning the architecture of GNNs.
A fundamental and principled solution is derived by aligning NHKs on teacher and student models, dubbed as Geometric Knowledge Distillation.
arXiv Detail & Related papers (2022-10-24T08:01:58Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models as well as the data-driven expressability afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Artificial neural network as a universal model of nonlinear dynamical systems [0.0]
The map is built as an artificial neural network whose weights encode a modeled system.
We consider the Lorenz system, the Roessler system and also the Hindmarsh-Rose neuron.
High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams and Lyapunov exponents.
arXiv Detail & Related papers (2021-03-06T16:02:41Z)
- E(n) Equivariant Graph Neural Networks [86.75170631724548]
This paper introduces a new model, E(n)-Equivariant Graph Neural Networks (EGNNs), to learn graph neural networks equivariant to rotations, translations, reflections and permutations.
In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance.
arXiv Detail & Related papers (2021-02-19T10:25:33Z)
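The equivariance property claimed by EGNN-style layers can be checked numerically. The sketch below uses a coordinate update of the form x_i' = x_i + sum_j (x_i - x_j) * phi(||x_i - x_j||^2), where phi is a stand-in scalar function (the paper uses a learned MLP); the graph, phi, and normalization are illustrative assumptions, not the paper's implementation. Because the update depends on coordinates only through pairwise differences and invariant distances, rotating and translating the input commutes with applying the layer.

```python
import numpy as np

def egnn_coord_update(x):
    """Toy E(n)-equivariant coordinate update on a fully connected graph."""
    diff = x[:, None, :] - x[None, :, :]      # pairwise differences (n, n, d)
    d2 = (diff ** 2).sum(-1, keepdims=True)   # invariant squared distances
    phi = 1.0 / (1.0 + d2)                    # stand-in scalar edge weight
    return x + (diff * phi).sum(axis=1) / len(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                   # 5 nodes in 3-D
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
t = rng.normal(size=3)                        # random translation

out1 = egnn_coord_update(x @ R.T + t)         # transform first, then update
out2 = egnn_coord_update(x) @ R.T + t         # update first, then transform
print(np.allclose(out1, out2))                # equal: the layer is equivariant
```

No higher-order representations are needed for this property, which matches the blurb's claim about avoiding expensive intermediate representations.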
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.