Learning Dynamics from Noisy Measurements using Deep Learning with a
Runge-Kutta Constraint
- URL: http://arxiv.org/abs/2109.11446v1
- Date: Thu, 23 Sep 2021 15:43:45 GMT
- Title: Learning Dynamics from Noisy Measurements using Deep Learning with a
Runge-Kutta Constraint
- Authors: Pawan Goyal and Peter Benner
- Abstract summary: We discuss a methodology to learn differential equation(s) using noisy and sparsely sampled measurements.
The main innovation of our methodology lies in the integration of deep neural networks with a classical numerical integration method.
- Score: 9.36739413306697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Measurement noise is an inevitable part of collecting data from a physical
process. Noise removal is therefore a necessary step in drawing conclusions from these
data, and it becomes especially essential when constructing dynamical models from
them. We discuss a methodology to learn differential equation(s) using
noisy and sparsely sampled measurements. The main innovation of our
methodology lies in the integration of deep neural networks with a
classical numerical integration method. Precisely, we aim at learning one neural
network that implicitly represents the data and an additional neural network
that models the vector fields of the dependent variables. We combine these two
networks by enforcing the constraint that the data at the next time step can
be obtained by following a numerical integration scheme such as the fourth-order
Runge-Kutta scheme. The proposed framework for learning a model predicting the
vector field is highly effective under noisy measurements. The approach can also
handle scenarios where the dependent variables are not available on the same
temporal grid. We demonstrate the effectiveness of the proposed method by
learning models from data obtained from various differential equations. The
proposed approach provides a promising methodology for learning dynamic models
where a first-principles understanding remains opaque.
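The core idea in the abstract, combining an implicit representation of the data with a vector-field network via a Runge-Kutta constraint, can be sketched numerically. The sketch below is illustrative, not the authors' implementation: the two "networks" are stand-in callables, and the loss simply penalizes the mismatch between the represented state at the next time step and the classical RK4 step taken from the current one.

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step from state x with step size h."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_constraint_loss(x_net, f_net, t_grid):
    """Mean squared mismatch between the implicit representation x_net at
    each next time point and the RK4 prediction using the vector field f_net."""
    loss = 0.0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        pred = rk4_step(f_net, x_net(t0), t1 - t0)
        loss += np.sum((x_net(t1) - pred) ** 2)
    return loss / (len(t_grid) - 1)

# Sanity check: for dx/dt = -x, the exact solution x(t) = exp(-t)
# should make the constraint loss nearly zero.
t = np.linspace(0.0, 1.0, 11)
loss = rk4_constraint_loss(lambda s: np.array([np.exp(-s)]),
                           lambda x: -x, t)
```

In the actual method, both callables would be trainable networks, and this constraint term would be minimized jointly with a data-fit term tying the implicit representation to the noisy measurements.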
Related papers
- Learning of networked spreading models from noisy and incomplete data [7.669018800404791]
We introduce a universal learning method based on scalable dynamic message-passing technique.
The algorithm leverages available prior knowledge on the model and on the data, and reconstructs both network structure and parameters of a spreading model.
We show that the method's computational complexity is linear in the key model parameters, which makes the algorithm scalable to large network instances.
arXiv Detail & Related papers (2023-12-20T13:12:47Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - A Robust SINDy Approach by Combining Neural Networks and an Integral
Form [8.950469063443332]
We propose a robust method to discover governing equations from noisy and scarce data.
We use neural networks to learn an implicit representation based on measurement data.
We obtain the derivative information required for SINDy using an automatic differentiation tool.
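The blurb above notes that the time derivatives SINDy needs are obtained via automatic differentiation of the learned implicit representation rather than noisy finite differences. A minimal forward-mode (dual-number) sketch of that idea, with a stand-in smooth function in place of a trained network; all names are illustrative:

```python
import math

class Dual:
    """Dual number a + b*eps: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)

def sin(d):
    # Chain rule: d/dt sin(u) = cos(u) * du/dt
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)

def derivative(f, t):
    """Evaluate df/dt at t by seeding the dual part with 1."""
    return f(Dual(t, 1.0)).dot

# Stand-in representation x(t) = t*sin(t); exact dx/dt = sin(t) + t*cos(t).
dx = derivative(lambda t: t * sin(t), 0.5)
```

In practice one would use a library autodiff tool on the trained network; the point is that derivatives of a smooth learned representation are exact, which is what makes the downstream SINDy regression robust to measurement noise.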
arXiv Detail & Related papers (2023-09-13T10:50:04Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Deep Active Learning with Noise Stability [24.54974925491753]
Uncertainty estimation for unlabeled data is crucial to active learning.
We propose a novel algorithm that leverages noise stability to estimate data uncertainty.
Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis.
arXiv Detail & Related papers (2022-05-26T13:21:01Z) - Neural ODEs with Irregular and Noisy Data [8.349349605334316]
We discuss a methodology to learn differential equation(s) using noisy and irregularly sampled measurements.
In our methodology, the main innovation can be seen in the integration of deep neural networks with the neural ordinary differential equations (ODEs) approach.
The proposed framework to learn a model describing the vector field is highly effective under noisy measurements.
arXiv Detail & Related papers (2022-05-19T11:24:41Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data is noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.