Graph Neural Stochastic Differential Equations
- URL: http://arxiv.org/abs/2308.12316v1
- Date: Wed, 23 Aug 2023 09:20:38 GMT
- Title: Graph Neural Stochastic Differential Equations
- Authors: Richard Bergna, Felix Opolka, Pietro Liò, Jose Miguel Hernandez-Lobato
- Abstract summary: We present a novel model, Graph Neural Stochastic Differential Equations (Graph Neural SDEs).
This technique enhances Graph Neural Ordinary Differential Equations (Graph Neural ODEs) by embedding randomness into the data representation using Brownian motion.
We find that Latent Graph Neural SDEs surpass conventional models like Graph Convolutional Networks and Graph Neural ODEs, especially in confidence prediction.
- Score: 3.568455515949288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel model, Graph Neural Stochastic Differential Equations
(Graph Neural SDEs). This technique enhances Graph Neural Ordinary Differential
Equations (Graph Neural ODEs) by embedding randomness into the data
representation using Brownian motion. This inclusion allows for the assessment
of prediction uncertainty, a crucial aspect frequently missed in current
models. In our framework, we spotlight the \textit{Latent Graph Neural SDE}
variant, demonstrating its effectiveness. Through empirical studies, we find
that Latent Graph Neural SDEs surpass conventional models like Graph
Convolutional Networks and Graph Neural ODEs, especially in confidence
prediction, making them superior in handling out-of-distribution detection
across both static and spatio-temporal contexts.
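The mechanism in the abstract admits a compact sketch: integrate an SDE whose drift and diffusion are graph convolutions, then read uncertainty off an ensemble of stochastic forward passes. The class names, the diagonal-noise diffusion, and the fixed-step Euler-Maruyama solver below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a Graph Neural SDE: dX_t = f(X_t, t) dt + g(X_t, t) dW_t,
# with drift f and diffusion g parameterised as simple graph convolutions.
import torch
import torch.nn as nn

class GraphSDEFunc(nn.Module):
    """Drift and diffusion networks; a_hat is a normalised adjacency (N x N)."""
    def __init__(self, dim, a_hat):
        super().__init__()
        self.a_hat = a_hat
        self.drift_w = nn.Linear(dim, dim)
        self.diff_w = nn.Linear(dim, dim)

    def drift(self, x, t):
        return torch.tanh(self.a_hat @ self.drift_w(x))

    def diffusion(self, x, t):
        # Diagonal noise; softplus keeps the diffusion magnitude positive.
        return nn.functional.softplus(self.a_hat @ self.diff_w(x))

def euler_maruyama(func, x0, t0=0.0, t1=1.0, steps=20):
    """Integrate the graph SDE with the Euler-Maruyama scheme."""
    dt = (t1 - t0) / steps
    x = x0
    for i in range(steps):
        t = t0 + i * dt
        dw = torch.randn_like(x) * dt ** 0.5   # Brownian increment ~ N(0, dt)
        x = x + func.drift(x, t) * dt + func.diffusion(x, t) * dw
    return x

if __name__ == "__main__":
    n, d = 5, 8
    a_hat = torch.eye(n)                       # stand-in for a real normalised adjacency
    func = GraphSDEFunc(d, a_hat)
    x0 = torch.randn(n, d)
    # Ensemble of stochastic passes: mean = prediction, spread = uncertainty.
    samples = torch.stack([euler_maruyama(func, x0) for _ in range(32)])
    print(samples.mean(0).shape, samples.std(0).mean())
```

Because each forward pass draws fresh Brownian increments, the spread of the ensemble directly estimates predictive uncertainty, which is what enables the confidence prediction and out-of-distribution detection the abstract describes.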
Related papers
- Uncertainty Modeling in Graph Neural Networks via Stochastic Differential Equations [14.422150854883453]
We introduce Latent Graph Neural Stochastic Differential Equations (LGNSDE), which enhance Graph Neural ODEs (GNODE) by embedding randomness through Brownian motion to quantify uncertainty.
We provide theoretical guarantees for LGNSDE and empirically show better performance in uncertainty quantification.
arXiv Detail & Related papers (2024-08-28T19:59:58Z) - Coupling Graph Neural Networks with Fractional Order Continuous Dynamics: A Robustness Study [24.950680319986486]
We rigorously investigate the robustness of graph neural fractional-order differential equation (FDE) models.
This framework extends beyond traditional graph neural (integer-order) ordinary differential equation (ODE) models by implementing the time-fractional Caputo derivative.
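For reference, the time-fractional Caputo derivative of order $\alpha \in (0, 1)$ named above is
\[
{}^{C}D_t^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{x'(s)}{(t-s)^{\alpha}} \, \mathrm{d}s,
\]
which recovers the ordinary derivative as $\alpha \to 1$. A graph neural FDE then replaces the integer-order update $\mathrm{d}X/\mathrm{d}t = f_\theta(X, G)$ with ${}^{C}D_t^{\alpha} X(t) = f_\theta(X(t), G)$; the memory introduced by integrating over past states is what the robustness study examines. (The generic form of $f_\theta$ here is an illustrative assumption, not the paper's exact parameterisation.)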
arXiv Detail & Related papers (2024-01-09T02:56:52Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
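As background (the paper's exact graph formulation may differ), the classical advective diffusion equation that inspires such models is
\[
\frac{\partial u}{\partial t} = \nabla \cdot (D \nabla u) - \nabla \cdot (\mathbf{v}\, u),
\]
where the first term diffuses information and the second transports it along a velocity field $\mathbf{v}$; graph variants typically replace the continuous operators with a graph Laplacian and a learned directed propagation operator.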
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error (RMSE) results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Learning Dynamic Graph Embeddings with Neural Controlled Differential Equations [21.936437653875245]
This paper focuses on representation learning for dynamic graphs with temporal interactions.
We propose a generic differential model for dynamic graphs that characterises the continuous-time evolution of node embedding trajectories.
Our framework exhibits several desirable characteristics, including the ability to express dynamics on evolving graphs without integration by segments.
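A generic neural controlled differential equation of the kind referenced here evolves a latent state $z_t$ driven by an observed path; a minimal sketch (not necessarily the paper's exact model) is
\[
z_t = z_{t_0} + \int_{t_0}^{t} f_\theta(z_s) \, \mathrm{d}X_s,
\]
where $X_s$ is a continuous interpolation of the discrete temporal interactions and $f_\theta$ is a learned vector field. Because the integral is driven by the whole path, the state need not be re-integrated segment by segment when new interactions arrive.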
arXiv Detail & Related papers (2023-02-22T12:59:38Z) - Graph Sequential Neural ODE Process for Link Prediction on Dynamic and Sparse Graphs [33.294977897987685]
Link prediction on dynamic graphs is an important task in graph mining.
Existing approaches based on dynamic graph neural networks (DGNNs) typically require a significant amount of historical data.
We propose a novel method based on the neural process, called Graph Sequential Neural ODE Process (GSNOP).
arXiv Detail & Related papers (2022-11-15T23:21:02Z) - On the Robustness of Graph Neural Diffusion to Topology Perturbations [30.284359808863588]
We show that graph neural PDEs are intrinsically more robust against topology perturbation as compared to other GNNs.
We propose a general graph neural PDE framework based on which a new class of robust GNNs can be defined.
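The prototypical instance of a graph neural PDE in this setting is heat diffusion over the graph; a minimal example (the paper's framework is more general) is
\[
\frac{\mathrm{d}X(t)}{\mathrm{d}t} = -L X(t), \qquad X(0) = X_0,
\]
with solution $X(t) = e^{-tL} X_0$ for graph Laplacian $L$, so robustness to topology perturbations amounts to asking how solutions change when $L$ is replaced by a perturbed $\tilde{L}$.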
arXiv Detail & Related papers (2022-09-16T07:19:35Z) - Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
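For context, score-based models of this kind pair a forward noising SDE with its reverse-time counterpart,
\[
\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w,
\qquad
\mathrm{d}x = \left[f(x,t) - g(t)^2 \nabla_x \log p_t(x)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w},
\]
where the learned score $\nabla_x \log p_t(x)$ drives generation. This paper runs a system of such SDEs jointly over node features and adjacency; the generic drift and diffusion above are an illustration rather than its exact choices.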
arXiv Detail & Related papers (2022-02-05T08:21:04Z) - Non-separable Spatio-temporal Graph Kernels via SPDEs [90.62347738138594]
We provide graph kernels for principled spatio-temporal modelling on graphs.
By providing novel tools for modelling on graphs, we outperform pre-existing graph kernels in real-world applications.
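A standard bridge from SPDEs to kernels, sketched here as background (this generic form is an assumption, not the paper's non-separable construction), is the Whittle-Matérn identity with the Laplace operator replaced by the graph Laplacian $L$:
\[
(\kappa^2 + L)^{\alpha/2} u = \mathcal{W},
\]
whose solution $u$ is a Gaussian process over the nodes with Matérn-type covariance; non-separable spatio-temporal kernels then arise from SPDEs that couple time derivatives with $L$.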
arXiv Detail & Related papers (2021-11-16T14:53:19Z) - Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs [77.33781731432163]
We learn dynamic graph representations in hyperbolic space, for the first time, with the aim of inferring node representations.
We present a novel Hyperbolic Variational Graph Network, referred to as HVGNN.
In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach.
arXiv Detail & Related papers (2021-04-06T01:44:15Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Stochastic regularization of neural networks (e.g., dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation largely eliminate the empirical gains from stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.