Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear
Dynamics
- URL: http://arxiv.org/abs/2012.06191v1
- Date: Fri, 11 Dec 2020 08:34:26 GMT
- Title: Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear
Dynamics
- Authors: Tomoharu Iwata, Yoshinobu Kawahara
- Abstract summary: We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
- Score: 49.41640137945938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Koopman spectral analysis has attracted attention for understanding
nonlinear dynamical systems: it lets us analyze nonlinear dynamics in a linear
regime by lifting observations with a nonlinear function. For this analysis, we
need to find an appropriate lift function. Although several methods have been
proposed for estimating a lift function based on neural networks, the existing
methods train neural networks without spectral analysis. In this paper, we
propose neural dynamic mode decomposition, in which neural networks are trained
such that the forecast error is minimized when the dynamics is modeled based on
spectral decomposition in the lifted space. With our proposed method, the
forecast error is backpropagated through the neural networks and the spectral
decomposition, enabling end-to-end learning of Koopman spectral analysis. When
information is available on the frequencies or the growth rates of the
dynamics, the proposed method can exploit it as a regularizer during training. We
also propose an extension of our approach when observations are influenced by
exogenous control time-series. Our experiments demonstrate the effectiveness of
our proposed method in terms of eigenvalue estimation and forecast performance.
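The pipeline the abstract describes (lift the observations, fit a linear operator in the lifted space, forecast via its spectral decomposition) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the learned neural-network lift is replaced by a fixed, hand-picked monomial lift, there is no end-to-end training, and `lift`, `dmd_forecast`, and the toy dynamics are all illustrative assumptions.

```python
import numpy as np

def lift(x):
    # Hypothetical fixed lift function standing in for the learned
    # neural network: here, monomial features [x, x^2].
    return np.vstack([x, x**2])

def dmd_forecast(X, steps):
    # X: (d, T) matrix of lifted snapshots. Fit a linear operator A by
    # least squares so that X[:, 1:] ~= A @ X[:, :-1], then forecast by
    # evolving the spectral decomposition of A.
    X0, X1 = X[:, :-1], X[:, 1:]
    A = X1 @ np.linalg.pinv(X0)
    eigvals, eigvecs = np.linalg.eig(A)
    # Express the last snapshot in the eigenbasis (mode amplitudes b).
    b = np.linalg.solve(eigvecs, X[:, -1].astype(complex))
    # Each mode evolves as lambda^t; recombine modes at each horizon.
    preds = [(eigvecs @ (eigvals**t * b)).real for t in range(1, steps + 1)]
    return np.array(preds).T, eigvals

# Toy dynamics that are linear in the lifted space: x_{t+1} = 0.9 * x_t,
# so the lifted coordinates evolve with eigenvalues 0.9 and 0.81.
x = 0.9 ** np.arange(20)
X = lift(x)
preds, eigvals = dmd_forecast(X, steps=3)
```

In the full method, the forecast error of `preds` against held-out snapshots would be backpropagated through both the spectral decomposition and the lift network; here the lift is frozen, so only the DMD step is shown.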
Related papers
- Neural Harmonium: An Interpretable Deep Structure for Nonlinear Dynamic
System Identification with Application to Audio Processing [4.599180419117645]
Interpretability helps us understand a model's ability to generalize and reveal its limitations.
We introduce a causal interpretable deep structure for modeling dynamic systems.
Our proposed model makes use of the harmonic analysis by modeling the system in a time-frequency domain.
arXiv Detail & Related papers (2023-10-10T21:32:15Z)
- Constraining Chaos: Enforcing dynamical invariants in the training of
recurrent neural networks [0.0]
We introduce a novel training method for machine learning based forecasting methods for chaotic dynamical systems.
The training enforces dynamical invariants--such as the Lyapunov exponent spectrum and fractal dimension--in the systems of interest, enabling longer and more stable forecasts when operating with limited data.
arXiv Detail & Related papers (2023-04-24T00:33:47Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Dynamical Hyperspectral Unmixing with Variational Recurrent Neural
Networks [25.051918587650636]
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
arXiv Detail & Related papers (2023-03-19T04:51:34Z)
- Spectral learning of Bernoulli linear dynamical systems models [21.3534487101893]
We develop a learning method for fast, efficient fitting of latent linear dynamical system models.
Our approach extends traditional subspace identification methods to the Bernoulli setting.
We show that the estimator provides practical utility in real-world settings by analyzing data from mice performing a sensory decision-making task.
arXiv Detail & Related papers (2023-03-03T16:29:12Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Recurrent Neural Network Training with Convex Loss and Regularization
Functions by Extended Kalman Filtering [0.20305676256390928]
We show that the learning method outperforms gradient descent in a nonlinear system identification benchmark.
We also explore the use of the algorithm in data-driven nonlinear model predictive control and its relation with disturbance models for offset-free tracking.
arXiv Detail & Related papers (2021-11-04T07:49:15Z)
- Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Stochastically forced ensemble dynamic mode decomposition for
forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
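The forced-linear-system idea in this last entry can be sketched with generic DMD with control: regress the next state jointly on the current state and the exogenous forcing. This is a simplified stand-in rather than the paper's ensemble method; the function name and the toy system are illustrative assumptions.

```python
import numpy as np

def dmdc_fit(X, U):
    # Fit x_{t+1} ~= A x_t + B u_t by least squares over the stacked
    # snapshot/forcing matrix (generic DMD with control).
    X0, X1 = X[:, :-1], X[:, 1:]
    Omega = np.vstack([X0, U[:, :X0.shape[1]]])
    G = X1 @ np.linalg.pinv(Omega)
    n = X.shape[0]
    return G[:, :n], G[:, n:]  # A (state dynamics), B (forcing response)

# Toy forced system: x_{t+1} = 0.5 x_t + u_t with known forcing u.
rng = np.random.default_rng(0)
u = rng.standard_normal((1, 30))
x = np.zeros((1, 31))
for t in range(30):
    x[:, t + 1] = 0.5 * x[:, t] + u[:, t]
A, B = dmdc_fit(x, u)
```

Because the intrinsic dynamics `A` are identified separately from the forcing response `B`, the fitted model stays interpretable: the eigenvalues of `A` describe the unforced behavior on their own.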
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.