Uncertainty and Structure in Neural Ordinary Differential Equations
- URL: http://arxiv.org/abs/2305.13290v1
- Date: Mon, 22 May 2023 17:50:42 GMT
- Title: Uncertainty and Structure in Neural Ordinary Differential Equations
- Authors: Katharina Ott, Michael Tiemann, Philipp Hennig
- Abstract summary: We show that basic and lightweight Bayesian deep learning techniques like the Laplace approximation can be applied to neural ODEs.
We explore how mechanistic knowledge and uncertainty quantification interact on two recently proposed neural ODE frameworks.
- Score: 28.12033356095061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural ordinary differential equations (ODEs) are an emerging class of deep
learning models for dynamical systems. They are particularly useful for
learning an ODE vector field from observed trajectories (i.e., inverse
problems). We here consider aspects of these models relevant for their
application in science and engineering. Scientific predictions generally
require structured uncertainty estimates. As a first contribution, we show that
basic and lightweight Bayesian deep learning techniques like the Laplace
approximation can be applied to neural ODEs to yield structured and meaningful
uncertainty quantification. But, in the scientific domain, available
information often goes beyond raw trajectories, and also includes mechanistic
knowledge, e.g., in the form of conservation laws. We explore how mechanistic
knowledge and uncertainty quantification interact on two recently proposed
neural ODE frameworks - symplectic neural ODEs and physical models augmented
with neural ODEs. In particular, uncertainty estimates reflect the effect of
mechanistic information more directly than the predictive power of the trained
model alone. Conversely, structure can improve the extrapolation abilities of
neural ODEs, an effect that is best assessed in practice through uncertainty
estimates. Our experimental analysis demonstrates the effectiveness of the
Laplace approach on both low-dimensional ODE problems and a high-dimensional
partial differential equation.
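To make the recipe concrete, the sketch below illustrates the kind of pipeline the abstract describes: fit a neural ODE to observed trajectories, place a lightweight Laplace approximation around the trained weights, and propagate weight samples through the integrator to obtain trajectory-level uncertainty. The architecture, the hand-rolled RK4 integrator, and the diagonal Fisher estimate are illustrative choices of ours, not details taken from the paper.

```python
# Minimal sketch (illustrative, not the authors' code): neural ODE ->
# diagonal Laplace approximation -> distribution over trajectories.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, x):
        return self.net(x)

def rk4_step(f, t, x, dt):
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, ts):
    xs = [x0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        xs.append(rk4_step(f, t0, xs[-1], t1 - t0))
    return torch.stack(xs)

# Toy data: a noisy damped oscillation stands in for observed trajectories.
ts = torch.linspace(0.0, 4.0, 40)
x_true = torch.stack([torch.exp(-0.1 * ts) * torch.cos(ts),
                      -torch.exp(-0.1 * ts) * torch.sin(ts)], dim=-1)
x_obs = x_true + 0.01 * torch.randn_like(x_true)

# 1) MAP training of the vector field on the observed trajectory.
f = VectorField()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    ((integrate(f, x_obs[0], ts) - x_obs) ** 2).mean().backward()
    opt.step()

# 2) Diagonal Laplace: posterior precision = prior precision + Fisher
#    diagonal, here estimated from squared gradients of one-step residuals.
fisher = [torch.zeros_like(p) for p in f.parameters()]
for i in range(len(ts) - 1):
    f.zero_grad()
    res = rk4_step(f, ts[i], x_obs[i], ts[i + 1] - ts[i]) - x_obs[i + 1]
    (res ** 2).sum().backward()
    for F, p in zip(fisher, f.parameters()):
        F += p.grad ** 2
post_std = [(1.0 + F).rsqrt() for F in fisher]  # prior precision fixed at 1.0

# 3) Sample weights from the Gaussian posterior; the spread of the
#    resulting trajectories is the structured uncertainty estimate.
map_params = [p.detach().clone() for p in f.parameters()]
samples = []
with torch.no_grad():
    for _ in range(30):
        for p, mu, sd in zip(f.parameters(), map_params, post_std):
            p.copy_(mu + sd * torch.randn_like(mu))
        samples.append(integrate(f, x_obs[0], ts))
traj_mean = torch.stack(samples).mean(0)
traj_std = torch.stack(samples).std(0)  # grows where the model is unsure
```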
Related papers
- Projected Neural Differential Equations for Learning Constrained Dynamics [3.570367665112327]
We introduce projected neural differential equations (PNDEs), a new method for constraining neural differential equations by projecting the learned vector field onto the tangent space of the constraint manifold.
PNDEs outperform existing methods while requiring fewer hyperparameters.
The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems.
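The projection at the heart of this approach is compact enough to sketch: for a constraint g(x) = 0 with Jacobian J(x), the learned field f(x) is replaced by its tangential component f - J^T (J J^T)^{-1} J f. A toy illustration (ours, not the paper's implementation):

```python
# Toy tangent-space projection for a constrained neural differential
# equation (our illustration, not the paper's code).
import torch

def projected_field(f, g, x):
    """Project f(x) onto {v : J(x) v = 0}, the tangent space of g(x) = 0."""
    J = torch.autograd.functional.jacobian(g, x)  # (m, d) constraint Jacobian
    v = f(x)
    lam = torch.linalg.solve(J @ J.T, J @ v)      # (J J^T)^{-1} J v
    return v - J.T @ lam                          # drop the normal component

# Example constraint: stay on the unit circle, g(x) = |x|^2 - 1.
g = lambda x: (x @ x - 1.0).unsqueeze(0)
f = lambda x: torch.stack([-x[1], x[0]]) + 0.3 * x  # rotation + radial drift

x = torch.tensor([1.0, 0.0])
for _ in range(100):                              # explicit Euler roll-out
    x = x + 0.01 * projected_field(f, g, x)
print(x @ x)  # remains close to 1: the radial drift is projected out
```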
arXiv Detail & Related papers (2024-10-31T06:32:43Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Embedding Capabilities of Neural ODEs [0.0]
We study input-output relations of neural ODEs using dynamical systems theory.
We prove several results about the exact embedding of maps in different neural ODE architectures in low and high dimensions.
arXiv Detail & Related papers (2023-08-02T15:16:34Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn end-to-end models that implicitly capture both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that guarantees standard constitutive priors by construction.
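The division of labor described here, hard-coding the governing equation and learning only the constitutive relation, can be sketched in a few lines (our simplification; NCLaw's actual architecture and constitutive priors are more involved):

```python
# Hedged sketch of "fixed governing PDE + learned constitutive law"
# (our simplification, not NCLaw itself).
import torch
import torch.nn as nn

# Only the constitutive relation sigma(u) is a neural network...
sigma = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_step(u, dx=0.1, dt=1e-3):
    """...while the governing 1-D conservation law u_t + (sigma(u))_x = 0
    is discretized explicitly and enforced by construction, never learned."""
    flux = sigma(u.unsqueeze(-1)).squeeze(-1)
    return u - dt / dx * (flux - torch.roll(flux, 1))  # periodic difference

# Training would fit sigma so that rolled-out states match motion
# observations; the PDE structure itself is fixed throughout.
u = torch.sin(torch.linspace(0.0, 6.28, 64))
u_next = pde_step(u)
```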
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs).
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
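The correspondence this relies on is that poles in the Laplace domain map to complex exponentials in time, so a single rational function can encode damped oscillations and memory effects compactly. A small numerical check of that correspondence (our illustration, not the paper's implementation):

```python
# Poles in the Laplace domain <-> complex exponentials in time:
# F(s) = sum_k c_k / (s - a_k) inverts to f(t) = sum_k c_k * exp(a_k * t).
import torch

poles = torch.tensor([-0.5 + 3.0j, -0.5 - 3.0j])   # conjugate pole pair
coeffs = torch.tensor([0.5 + 0.0j, 0.5 + 0.0j])

def f(t):
    return (coeffs * torch.exp(poles * t)).real.sum(-1)

t = torch.linspace(0.0, 5.0, 100).unsqueeze(-1)    # broadcast over the poles
signal = f(t)                                      # equals exp(-t/2) cos(3t)
target = torch.exp(-0.5 * t.squeeze()) * torch.cos(3.0 * t.squeeze())
assert torch.allclose(signal, target, atol=1e-4)
```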
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility of mechanistic models and the data-driven expressivity of AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models [38.17499046781131]
We propose a novel approach to estimating uncertain neural ODEs that avoids the numerical integration bottleneck.
Our algorithm - distributional gradient matching (DGM) - jointly trains a smoother and a dynamics model and matches their gradients by minimizing a Wasserstein loss.
Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
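Stripped of the distributional treatment and the Wasserstein loss, the underlying gradient-matching trick looks roughly like this (a deliberately simplified, deterministic sketch of ours, not DGM itself):

```python
# Simplified deterministic gradient matching (not DGM itself): fit a
# smoother to the trajectory, then make the dynamics model match the
# smoother's time derivative; no ODE solver in the training loop.
import torch
import torch.nn as nn

ts = torch.linspace(0.0, 4.0, 80).unsqueeze(-1)       # (80, 1) time grid
x_obs = torch.exp(-ts) + 0.05 * torch.randn_like(ts)  # noisy x' = -x data

smoother = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
dynamics = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam([*smoother.parameters(),
                        *dynamics.parameters()], lr=1e-3)

for _ in range(3000):
    opt.zero_grad()
    t = ts.clone().requires_grad_(True)
    x_hat = smoother(t)
    # time derivative of the smoother via autograd
    dx_dt = torch.autograd.grad(x_hat.sum(), t, create_graph=True)[0]
    fit = ((x_hat - x_obs) ** 2).mean()               # smoother fits data
    match = ((dx_dt - dynamics(x_hat)) ** 2).mean()   # derivatives match f
    (fit + match).backward()
    opt.step()
# dynamics should now approximate f(x) = -x without ever calling a solver.
```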
arXiv Detail & Related papers (2021-06-22T08:40:51Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection and interpolation operations as layers of the neural network.
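The generic subspace-projection step that such compression builds on is easy to sketch with a proper orthogonal decomposition (a toy example of ours, not the paper's architecture):

```python
# Toy POD-based model order reduction (ours, not the paper's method):
# compress the state with a snapshot subspace V and integrate the
# reduced dynamics dz/dt = V^T f(V z).
import torch

def f(x):  # stand-in for a high-dimensional (possibly learned) field
    return -0.1 * x + 0.01 * torch.roll(x, 1)

# 1) Collect snapshots from a full-order simulation (explicit Euler).
x0 = torch.randn(512)
xs = [x0]
for _ in range(200):
    xs.append(xs[-1] + 0.01 * f(xs[-1]))
X = torch.stack(xs)                       # (201, 512) snapshot matrix

# 2) POD basis: leading left singular vectors of the snapshots.
U, S, Vh = torch.linalg.svd(X.T, full_matrices=False)
V = U[:, :16]                             # (512, 16) projection basis

# 3) Reduced-order model: evolve 16 coefficients instead of 512 states.
z = V.T @ x0
for _ in range(200):
    z = z + 0.01 * (V.T @ f(V @ z))
print((V @ z - xs[-1]).norm() / xs[-1].norm())  # relative reduction error
```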
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Bayesian Neural Ordinary Differential Equations [0.9422623204346027]
We demonstrate the successful integration of Neural ODEs with Bayesian inference frameworks.
We achieve a posterior sample accuracy of 98.5% on the test ensemble of 10,000 images.
This gives a scientific machine learning tool for probabilistic estimation of uncertainties.
arXiv Detail & Related papers (2020-12-14T04:05:26Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, data augmentation also eliminates the empirical gains from stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.