Coupling Graph Neural Networks with Fractional Order Continuous
Dynamics: A Robustness Study
- URL: http://arxiv.org/abs/2401.04331v2
- Date: Mon, 4 Mar 2024 05:57:06 GMT
- Title: Coupling Graph Neural Networks with Fractional Order Continuous
Dynamics: A Robustness Study
- Authors: Qiyu Kang, Kai Zhao, Yang Song, Yihang Xie, Yanan Zhao, Sijie Wang,
Rui She, and Wee Peng Tay
- Abstract summary: We rigorously investigate the robustness of graph neural fractional-order differential equation (FDE) models.
This framework extends beyond traditional graph neural (integer-order) ordinary differential equation (ODE) models by implementing the time-fractional Caputo derivative.
- Score: 24.950680319986486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we rigorously investigate the robustness of graph neural
fractional-order differential equation (FDE) models. This framework extends
beyond traditional graph neural (integer-order) ordinary differential equation
(ODE) models by implementing the time-fractional Caputo derivative. Utilizing
fractional calculus allows our model to consider long-term memory during the
feature updating process, diverging from the memoryless Markovian updates seen
in traditional graph neural ODE models. The superiority of graph neural FDE
models over graph neural ODE models has been established in environments free
from attacks or perturbations. While traditional graph neural ODE models have
been verified to possess a degree of stability and resilience in the presence
of adversarial attacks in existing literature, the robustness of graph neural
FDE models, especially under adversarial conditions, remains largely
unexplored. This paper undertakes a detailed assessment of the robustness of
graph neural FDE models. We establish a theoretical foundation outlining the
robustness characteristics of graph neural FDE models, highlighting that they
maintain more stringent output perturbation bounds in the face of input and
graph topology disturbances, compared to their integer-order counterparts. Our
empirical evaluations further confirm the enhanced robustness of graph neural
FDE models, highlighting their potential in adversarially robust applications.
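To make the memory effect described in the abstract concrete, here is a minimal numerical sketch of a fractional-order graph dynamic. It uses an explicit Grünwald-Letnikov discretization (which coincides with the Caputo derivative under zero initial history) of D^alpha X = (A_hat - I) X, a simple linear graph diffusion; the discretization scheme, the choice of right-hand side, and all names here are illustrative assumptions, not the paper's actual model or solver.

```python
import numpy as np

def gl_coefficients(alpha, n_steps):
    """Grunwald-Letnikov coefficients c_j = (-1)^j * binom(alpha, j),
    via the standard recurrence c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = np.empty(n_steps + 1)
    c[0] = 1.0
    for j in range(1, n_steps + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def integrate_graph_fde(X0, A_hat, alpha=0.8, h=0.1, n_steps=50):
    """Illustrative sketch (not the paper's solver): evolve node features X
    under D^alpha X = (A_hat - I) X with an explicit Grunwald-Letnikov step.
    Every past state enters each update, which is the long-term memory that
    the memoryless Euler step of an integer-order graph neural ODE lacks."""
    n = X0.shape[0]
    c = gl_coefficients(alpha, n_steps)
    history = [X0]
    for k in range(1, n_steps + 1):
        f = (A_hat - np.eye(n)) @ history[-1]                 # graph diffusion term
        memory = sum(c[j] * history[k - j] for j in range(1, k + 1))
        history.append(h**alpha * f - memory)                 # GL update
    return history[-1]
```

With alpha = 1 the coefficients reduce to c_1 = -1 and c_j = 0 for j >= 2, so the update collapses to the Euler step X_k = X_{k-1} + h * f(X_{k-1}), recovering the Markovian behavior of an integer-order graph neural ODE.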
Related papers
- CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new hybrid modeling approach combining knowledge-based and machine learning components, called the conditional Gaussian neural stochastic differential equation (CGNSDE), is developed.
In contrast to the standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z) - GNRK: Graph Neural Runge-Kutta method for solving partial differential
equations [0.0]
This study introduces a novel approach called the Graph Neural Runge-Kutta (GNRK) method.
GNRK integrates graph neural network modules with a recurrent structure inspired by classical solvers.
It demonstrates the capability to address general PDEs, irrespective of initial conditions or PDE coefficients.
arXiv Detail & Related papers (2023-10-01T08:52:46Z) - Graph Neural Stochastic Differential Equations [3.568455515949288]
We present a novel model, Graph Neural Stochastic Differential Equations (Graph Neural SDEs).
This technique enhances Graph Neural Ordinary Differential Equations (Graph Neural ODEs) by embedding randomness into the data representation using Brownian motion (a minimal numerical sketch of this mechanism appears after this list).
We find that Latent Graph Neural SDEs surpass conventional models like Graph Convolutional Networks and Graph Neural ODEs, especially in confidence prediction.
arXiv Detail & Related papers (2023-08-23T09:20:38Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - On the Robustness of Graph Neural Diffusion to Topology Perturbations [30.284359808863588]
We show that graph neural PDEs are intrinsically more robust to topology perturbations than other GNNs.
We propose a general graph neural PDE framework based on which a new class of robust GNNs can be defined.
arXiv Detail & Related papers (2022-09-16T07:19:35Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Score-based Generative Modeling of Graphs via the System of Stochastic
Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, data augmentation eliminates the empirical gains from stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
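As referenced in the Graph Neural SDEs entry above, the Brownian-motion mechanism can be sketched with a basic Euler-Maruyama integrator. This is a generic reading assuming a linear graph-diffusion drift and a constant diffusion coefficient sigma; it is not the authors' implementation, and the function name and parameters are illustrative.

```python
import numpy as np

def euler_maruyama_graph_sde(X0, A_hat, sigma=0.1, h=0.01, n_steps=100, seed=0):
    """Illustrative sketch: integrate dX = (A_hat - I) X dt + sigma dW
    with Euler-Maruyama. The Brownian increment dW injects randomness
    into the node representations at every step."""
    rng = np.random.default_rng(seed)
    n = X0.shape[0]
    X = X0.copy()
    for _ in range(n_steps):
        drift = (A_hat - np.eye(n)) @ X
        dW = rng.normal(scale=np.sqrt(h), size=X.shape)  # increment ~ N(0, h)
        X = X + h * drift + sigma * dW
    return X
```

Running this with several seeds yields a distribution over final embeddings rather than a point estimate, which is presumably the source of the confidence predictions that entry highlights.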