A Robust SINDy Approach by Combining Neural Networks and an Integral
Form
- URL: http://arxiv.org/abs/2309.07193v1
- Date: Wed, 13 Sep 2023 10:50:04 GMT
- Title: A Robust SINDy Approach by Combining Neural Networks and an Integral
Form
- Authors: Ali Forootani, Pawan Goyal, and Peter Benner
- Abstract summary: We propose a robust method to discover governing equations from noisy and scarce data.
We use neural networks to learn an implicit representation based on measurement data.
We obtain the derivative information required for SINDy using an automatic differentiation tool.
- Score: 8.950469063443332
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The discovery of governing equations from data has been an active field of
research for decades. One widely used methodology for this purpose is sparse
regression for nonlinear dynamics, known as SINDy. Despite several attempts,
noisy and scarce data still pose a severe challenge to the success of the SINDy
approach. In this work, we discuss a robust method to discover nonlinear
governing equations from noisy and scarce data. To do this, we make use of
neural networks to learn an implicit representation based on measurement data
so that it not only produces outputs in the vicinity of the measurements but
also ensures that the time evolution of the output can be described by a
dynamical system. Additionally, we learn such a dynamical system in the spirit of the SINDy
framework. Leveraging the implicit representation using neural networks, we
obtain the derivative information -- required for SINDy -- using an automatic
differentiation tool. To enhance the robustness of our methodology, we further
incorporate an integral condition on the output of the implicit networks.
Furthermore, we extend our methodology to handle data collected from multiple
initial conditions. We demonstrate the effectiveness of the proposed methodology
in discovering governing equations under noisy and scarce data regimes through
several examples and compare its performance with existing methods.
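To make the pipeline concrete, here is a minimal sketch in PyTorch of the ingredients named in the abstract: an implicit network x(t) fitted to noisy samples, derivatives obtained by automatic differentiation, a SINDy-style sparse regression over a polynomial candidate library, and a trapezoidal integral condition. The class names, library choice, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed names/hyperparameters): implicit representation + autodiff + SINDy + integral form.
import torch
import torch.nn as nn

class ImplicitState(nn.Module):
    """Implicit representation t -> x(t) of an n-dimensional state."""
    def __init__(self, n_state, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_state),
        )

    def forward(self, t):
        return self.net(t)

def library(x):
    """Polynomial candidate library Theta(x): constant, linear, and quadratic terms."""
    ones = torch.ones_like(x[:, :1])
    quad = torch.cat([x[:, i:i + 1] * x[:, j:j + 1]
                      for i in range(x.shape[1]) for j in range(i, x.shape[1])], dim=1)
    return torch.cat([ones, x, quad], dim=1)

def sindy_loss(model, Xi, t, x_meas, lam=1e-3):
    """Joint loss for the implicit network and the sparse coefficient matrix Xi."""
    t = t.detach().requires_grad_(True)
    x = model(t)
    # Derivatives of the implicit representation via automatic differentiation.
    dxdt = torch.stack([
        torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0].squeeze(-1)
        for i in range(x.shape[1])], dim=1)
    theta = library(x)
    fit = ((x - x_meas) ** 2).mean()              # stay close to the measurements
    deriv = ((dxdt - theta @ Xi) ** 2).mean()     # SINDy residual on the derivatives
    # Integral condition: x(t_k) should match x(t_0) plus the cumulative
    # trapezoidal integral of Theta(x) Xi, which tempers noise-amplified derivatives.
    dt = (t[1:, 0] - t[:-1, 0]).unsqueeze(1)
    rhs = theta @ Xi
    x_int = x[0:1] + torch.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * dt, dim=0)
    integral = ((x[1:] - x_int) ** 2).mean()
    sparsity = Xi.abs().mean()                    # encourage a sparse model
    return fit + deriv + integral + lam * sparsity
```

In practice the network weights and Xi would be optimized jointly (e.g., with Adam), with small entries of Xi pruned afterwards; one plausible extension to multiple initial conditions is one implicit network per trajectory sharing a single coefficient matrix Xi.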
Related papers
- Discovering Governing equations from Graph-Structured Data by Sparse Identification of Nonlinear Dynamical Systems [0.27624021966289597]
We develop a new method called Sparse Identification of Dynamical Systems from Graph-structured data (SINDyG).
SINDyG incorporates the network structure into sparse regression to identify model parameters that explain the underlying network dynamics.
arXiv Detail & Related papers (2024-09-02T17:51:37Z)
- GN-SINDy: Greedy Sampling Neural Network in Sparse Identification of Nonlinear Partial Differential Equations [1.104960878651584]
We introduce the greedy sampling neural network in sparse identification of nonlinear partial differential equations (GN-SINDy).
GN-SINDy blends a greedy sampling method, the neural network, and the SINDy algorithm.
In the implementation phase, to show the effectiveness of GN-SINDy, we compare its results with those of the DeePyMoD Python package.
arXiv Detail & Related papers (2024-05-14T13:56:12Z)
- Learning Latent Dynamics via Invariant Decomposition and
(Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Neural ODEs with Irregular and Noisy Data [8.349349605334316]
We discuss a methodology to learn differential equation(s) using noisy and irregular sampled measurements.
In our methodology, the main innovation can be seen in the integration of deep neural networks with the neural ordinary differential equations (ODEs) approach.
The proposed framework for learning a model describing the vector field is highly effective under noisy measurements.
arXiv Detail & Related papers (2022-05-19T11:24:41Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
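For intuition about this local-versus-global distinction, the sketch below contrasts a sliding-window Savitzky-Golay filter (local) with a smoothing spline fitted to the entire record (global); these are typical choices assumed here for illustration, not necessarily the exact smoothers compared in that study.

```python
# Illustrative local vs. global a priori denoising before SINDy (assumed smoother choices).
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 10.0, 500)
x_noisy = np.sin(t) + 0.05 * np.random.randn(t.size)

# Local: polynomial fit over a sliding window of 31 neighboring samples.
x_local = savgol_filter(x_noisy, window_length=31, polyorder=3)

# Global: one smoothing spline fitted to the whole measurement record.
spline = UnivariateSpline(t, x_noisy, k=3, s=0.5)
x_global = spline(t)
dx_global = spline.derivative()(t)  # smoothed derivative that could feed a SINDy regression
```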
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Learning Dynamics from Noisy Measurements using Deep Learning with a
Runge-Kutta Constraint [9.36739413306697]
We discuss a methodology to learn differential equation(s) using noisy and sparsely sampled measurements.
In our methodology, the main innovation lies in the integration of deep neural networks with a classical numerical integration method.
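As a rough sketch of how a classical integrator can constrain a learned vector field, the snippet below ties consecutive noisy snapshots together through a single fourth-order Runge-Kutta step; the network architecture and the uniform step size are illustrative assumptions rather than the authors' setup.

```python
# Sketch: one RK4 step of a neural-network vector field f should map
# measurement x_k to the next measurement x_{k+1} (assumed 2-D state, uniform sampling).
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # learned vector field

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk_constraint_loss(x_meas, dt):
    # x_meas: (N, 2) noisy snapshots spaced dt apart in time.
    x_pred = rk4_step(f, x_meas[:-1], dt)
    return ((x_pred - x_meas[1:]) ** 2).mean()
```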
arXiv Detail & Related papers (2021-09-23T15:43:45Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consist of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.