Neural ODEs with Irregular and Noisy Data
- URL: http://arxiv.org/abs/2205.09479v1
- Date: Thu, 19 May 2022 11:24:41 GMT
- Title: Neural ODEs with Irregular and Noisy Data
- Authors: Pawan Goyal and Peter Benner
- Abstract summary: We discuss a methodology to learn differential equation(s) using noisy and irregularly sampled measurements.
In our methodology, the main innovation can be seen in the integration of deep neural networks with the neural ordinary differential equations (ODEs) approach.
The proposed framework to learn a model describing the vector field is highly effective under noisy measurements.
- Score: 8.349349605334316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Measurement noise is an integral part of collecting data from a physical
process. Thus, noise removal is necessary to draw conclusions from these data,
and it often becomes essential when dynamical models are to be constructed from
these data. We discuss a methodology to learn differential equation(s) using
noisy and irregularly sampled measurements. In our methodology, the main
innovation can be seen in the integration of deep neural networks with the
neural ordinary differential equations (ODEs) approach. Precisely, we aim at
learning a neural network that provides (approximately) an implicit
representation of the data and an additional neural network that models the
vector fields of the dependent variables. We combine these two networks by
constraining them using neural ODEs. The proposed framework to learn a model
describing the vector field is highly effective under noisy measurements. The
approach can handle scenarios where dependent variables are not available at
the same temporal grid. Moreover, a particular structure, e.g., second-order
with respect to time, can easily be incorporated. We demonstrate the
effectiveness of the proposed method for learning models using data obtained
from various differential equations and present a comparison with the neural
ODE method that applies no special treatment to noise.
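The construction in the abstract lends itself to a collocation-style sketch: one network g(t) gives a smooth implicit representation of the noisy samples, a second network f(x) models the vector field, and a penalty couples the derivative of g to f(g(t)). The following is a minimal NumPy illustration under these assumptions; all names are ours, not the authors', and the time derivative is taken by finite differences rather than by a neural ODE solver.

```python
import numpy as np

# Hypothetical sketch of the two-network idea: minimize
#   loss = sum_i ||g(t_i) - y_i||^2  +  lam * sum_j ||g'(t_j) - f(g(t_j))||^2
# where g is the implicit representation of the data and f the vector field.
# Both "networks" are tiny one-hidden-layer MLPs in plain NumPy.

rng = np.random.default_rng(0)

def mlp_init(din, dh, dout):
    """Random parameters for a one-hidden-layer tanh MLP."""
    return {"W1": rng.normal(0, 0.5, (dh, din)), "b1": np.zeros(dh),
            "W2": rng.normal(0, 0.5, (dout, dh)), "b2": np.zeros(dout)}

def mlp(p, x):
    h = np.tanh(p["W1"] @ x + p["b1"])
    return p["W2"] @ h + p["b2"]

def loss(g, f, t_data, y_data, t_col, lam=1.0, eps=1e-3):
    # data-fit term on the (irregular) sample times
    data = sum(np.sum((mlp(g, np.array([t])) - y) ** 2)
               for t, y in zip(t_data, y_data))
    # neural-ODE-style coupling on a collocation grid,
    # with g'(t) approximated by central finite differences
    ode = 0.0
    for t in t_col:
        x = mlp(g, np.array([t]))
        dxdt = (mlp(g, np.array([t + eps])) - mlp(g, np.array([t - eps]))) / (2 * eps)
        ode += np.sum((dxdt - mlp(f, x)) ** 2)
    return data + lam * ode

# Noisy, irregularly sampled measurements of x(t) = exp(-t)
t_data = np.sort(rng.uniform(0.0, 2.0, 15))
y_data = np.exp(-t_data)[:, None] + 0.05 * rng.normal(size=(15, 1))

g = mlp_init(1, 16, 1)                # implicit representation t -> x(t)
f = mlp_init(1, 16, 1)                # vector field x -> dx/dt
t_col = np.linspace(0.0, 2.0, 40)     # collocation grid for the ODE penalty

print(loss(g, f, t_data, y_data, t_col))
```

In a full treatment this loss would be minimized jointly over both networks; the point of the sketch is only how the two networks are coupled.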
Related papers
- Foundational Inference Models for Dynamical Systems [5.549794481031468]
We offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs.
We propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs.
We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values.
arXiv Detail & Related papers (2024-02-12T11:48:54Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - A PINN Approach to Symbolic Differential Operator Discovery with Sparse Data [0.0]
In this work we perform symbolic discovery of differential operators in a situation where there is sparse experimental data.
We modify the PINN approach by adding a neural network that learns a representation of unknown hidden terms in the differential equation.
The algorithm yields both a surrogate solution to the differential equation and a black-box representation of the hidden terms.
arXiv Detail & Related papers (2022-12-09T02:09:37Z) - Learning differential equations from data [0.0]
In recent times, due to the abundance of data, there is an active search for data-driven methods to learn differential equation models from data.
We propose a forward-Euler-based neural network model and test its performance by learning ODEs from data using different numbers of hidden layers and different neural network widths.
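The forward-Euler constraint mentioned above can be illustrated with a short sketch (names are illustrative, not the paper's code): a learned vector field f is asked to reproduce each sample from its predecessor via one Euler step.

```python
import numpy as np

# Hedged sketch of a forward-Euler-constrained fit: given samples x_k at
# times t_k, a candidate vector field f should satisfy
#   x_{k+1} ~= x_k + (t_{k+1} - t_k) * f(x_k),
# so the training loss is the sum of squared Euler residuals.

def euler_residual(f, t, x):
    """Sum of squared forward-Euler residuals along one trajectory."""
    res = 0.0
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        res += np.sum((x[k + 1] - x[k] - h * f(x[k])) ** 2)
    return res

# Trajectory of dx/dt = -x; the true field gives only O(h^2) residuals,
# a wrong field gives much larger ones.
t = np.linspace(0.0, 1.0, 50)
x = np.exp(-t)[:, None]
print(euler_residual(lambda v: -v, t, x))
print(euler_residual(lambda v: v, t, x))
```

Training such a model amounts to minimizing this residual over the parameters of f, e.g. a small MLP.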
arXiv Detail & Related papers (2022-05-23T17:36:28Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds for ME-NODE and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - Learning Dynamics from Noisy Measurements using Deep Learning with a Runge-Kutta Constraint [9.36739413306697]
We discuss a methodology to learn differential equation(s) using noisy and sparsely sampled measurements.
In our methodology, the main innovation can be seen in the integration of deep neural networks with a classical numerical integration method.
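The Runge-Kutta constraint can be sketched analogously (a hedged illustration, not the authors' implementation): a candidate vector field is plugged into a classical RK4 step and penalized for failing to map one noisy sample to the next.

```python
import numpy as np

# Illustrative sketch: instead of a forward-Euler step, a fourth-order
# Runge-Kutta step of a candidate vector field f must reproduce
# consecutive samples of the trajectory.

def rk4_step(f, x, h):
    """One classical RK4 step of size h for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_residual(f, t, x):
    """Sum of squared RK4 one-step residuals along a trajectory."""
    return sum(np.sum((x[k + 1] - rk4_step(f, x[k], t[k + 1] - t[k])) ** 2)
               for k in range(len(t) - 1))

# Sparse samples of dx/dt = -x; the true field yields a tiny residual.
t = np.linspace(0.0, 1.0, 20)
x = np.exp(-t)[:, None]
print(rk4_residual(lambda v: -v, t, x))
```

The higher-order step makes the discretization error far smaller than Euler's for the same sample spacing, which is the point of the constraint.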
arXiv Detail & Related papers (2021-09-23T15:43:45Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Estimating Vector Fields from Noisy Time Series [6.939768185086753]
We describe a neural network architecture consisting of tensor products of one-dimensional neural shape functions.
We find that the neural shape function architecture retains the approximation properties of dense neural networks.
We also study the combination of either our neural shape function method or existing differential equation learning methods with alternating minimization and multiple trajectories.
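A minimal sketch of the tensor-product construction, assuming fixed (rather than trained) one-dimensional tanh shape functions; all names here are illustrative:

```python
import numpy as np

# Hypothetical illustration of "tensor products of one-dimensional neural
# shape functions": approximate a 2-D vector-field component as
#   F(x, y) ~= sum_ij c_ij * phi_i(x) * psi_j(y),
# where phi, psi are 1-D shape functions. Here they are fixed tanh ridge
# functions; in the paper's setting they would be learned.

def shape_functions(z, centers, width=1.0):
    """Evaluate 1-D tanh shape functions at the points z; shape (n, m)."""
    return np.tanh((z[:, None] - centers[None, :]) / width)

cx = np.linspace(-1, 1, 5)
cy = np.linspace(-1, 1, 5)
C = np.random.default_rng(1).normal(size=(5, 5))   # coefficients c_ij

def F(x, y):
    # tensor-product evaluation via einsum: F_n = Phi_ni C_ij Psi_nj
    Phi = shape_functions(x, cx)   # (n, 5)
    Psi = shape_functions(y, cy)   # (n, 5)
    return np.einsum("ni,ij,nj->n", Phi, C, Psi)

x = np.array([0.0, 0.5])
y = np.array([0.1, -0.2])
print(F(x, y).shape)
```

The appeal of the factorization is that the number of parameters grows with the number of 1-D shape functions per axis rather than with a full multi-dimensional grid.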
arXiv Detail & Related papers (2020-12-06T07:27:56Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a form suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the difference in performance between the neural ODE and the neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.