Discovering Governing Equations from Partial Measurements with Deep
Delay Autoencoders
- URL: http://arxiv.org/abs/2201.05136v1
- Date: Thu, 13 Jan 2022 18:48:16 GMT
- Title: Discovering Governing Equations from Partial Measurements with Deep
Delay Autoencoders
- Authors: Joseph Bakarji, Kathleen Champion, J. Nathan Kutz and Steven L.
Brunton
- Abstract summary: A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important.
Here, we design a custom deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space where the dynamics can be represented in a sparse, closed form.
We demonstrate this approach on the Lorenz, Rössler, and Lotka-Volterra systems, learning dynamics from a single measurement variable.
- Score: 4.446017969073817
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A central challenge in data-driven model discovery is the presence of hidden,
or latent, variables that are not directly measured but are dynamically
important. Takens' theorem provides conditions for when it is possible to
augment these partial measurements with time delayed information, resulting in
an attractor that is diffeomorphic to that of the original full-state system.
However, the coordinate transformation back to the original attractor is
typically unknown, and learning the dynamics in the embedding space has
remained an open challenge for decades. Here, we design a custom deep
autoencoder network to learn a coordinate transformation from the delay
embedded space into a new space where it is possible to represent the dynamics
in a sparse, closed form. We demonstrate this approach on the Lorenz,
R\"ossler, and Lotka-Volterra systems, learning dynamics from a single
measurement variable. As a challenging example, we learn a Lorenz analogue from
a single scalar variable extracted from a video of a chaotic waterwheel
experiment. The resulting modeling framework combines deep learning to uncover
effective coordinates and the sparse identification of nonlinear dynamics
(SINDy) for interpretable modeling. Thus, we show that it is possible to
simultaneously learn a closed-form model and the associated coordinate system
for partially observed dynamics.
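The framework described in the abstract has three ingredients: a delay embedding of the scalar measurement (Takens' theorem), an autoencoder that maps the embedded coordinates into a low-dimensional latent space, and a SINDy-style sparse regression on the latent dynamics. The following Python sketch illustrates these pieces only; the function and class names, network sizes, and loss terms are illustrative assumptions and are not taken from the authors' code.

```python
import numpy as np
import torch
import torch.nn as nn


def delay_embed(x, n_delays, tau=1):
    """Stack time-delayed copies of a scalar series x into rows of a Hankel matrix."""
    n = len(x) - (n_delays - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(n_delays)], axis=1)


class Autoencoder(nn.Module):
    """Maps delay-embedded measurements h to latent coordinates z and back."""

    def __init__(self, n_delays, latent_dim=3, width=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_delays, width), nn.ELU(),
                                 nn.Linear(width, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, width), nn.ELU(),
                                 nn.Linear(width, n_delays))

    def forward(self, h):
        z = self.enc(h)
        return z, self.dec(z)


def sindy_library(z):
    """Candidate functions Theta(z): constant, linear, and quadratic terms."""
    ones = torch.ones(z.shape[0], 1, device=z.device)
    quad = torch.stack([z[:, i] * z[:, j]
                        for i in range(z.shape[1])
                        for j in range(i, z.shape[1])], dim=1)
    return torch.cat([ones, z, quad], dim=1)


# Training (not shown) would minimize a weighted sum of
#   reconstruction error        ||h - dec(enc(h))||^2
# and a SINDy consistency loss  ||dz/dt - Theta(z) @ Xi||^2,
# with a sparsity penalty (L1 or sequential thresholding) on the
# coefficient matrix Xi, where dz/dt follows from dh/dt by the chain rule.
```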
Related papers
- From pixels to planning: scale-free active inference [42.04471916762639]
This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling.
We consider deep or hierarchical forms using the renormalisation group.
This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications.
arXiv Detail & Related papers (2024-07-27T14:20:48Z) - Generalizable Implicit Neural Representation As a Universal Spatiotemporal Traffic Data Learner [46.866240648471894]
Spatiotemporal Traffic Data (STTD) measures the complex dynamical behaviors of the multiscale transportation system.
We present a novel paradigm to address the STTD learning problem by parameterizing STTD as an implicit neural representation.
We validate its effectiveness through extensive experiments in real-world scenarios, showcasing applications from corridor to network scales.
arXiv Detail & Related papers (2024-06-13T02:03:22Z) - Autoencoders for discovering manifold dimension and coordinates in data
from complex dynamical systems [0.0]
The autoencoder framework combines implicit regularization with internal linear layers and $L_2$ regularization (weight decay).
We show that this framework can be naturally extended for applications of state-space modeling and forecasting.
arXiv Detail & Related papers (2023-05-01T21:14:47Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Uncovering Closed-form Governing Equations of Nonlinear Dynamics from
Videos [8.546520029145853]
We introduce a novel end-to-end unsupervised deep learning framework to uncover the mathematical structure of equations that governs the dynamics of moving objects in videos.
Such an architecture consists of (1) an encoder-decoder network that learns low-dimensional spatial/pixel coordinates of the moving object, (2) a learnable Spatial-Physical Transformation component that creates mapping between the extracted spatial/pixel coordinates and the latent physical states of dynamics, and (3) a numerical integrator-based sparse regression module that uncovers the parsimonious closed-form governing equations of learned physical states.
arXiv Detail & Related papers (2021-06-09T02:50:11Z) - Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z) - Model discovery in the sparse sampling regime [0.0]
We show how deep learning can improve model discovery of partial differential equations.
As a result, deep learning-based model discovery makes it possible to recover the underlying equations even from sparsely sampled data.
We illustrate our claims on both synthetic and experimental data sets.
arXiv Detail & Related papers (2021-05-02T06:27:05Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)