Deep Learning Enhanced Dynamic Mode Decomposition
- URL: http://arxiv.org/abs/2108.04433v1
- Date: Tue, 10 Aug 2021 03:54:23 GMT
- Title: Deep Learning Enhanced Dynamic Mode Decomposition
- Authors: Christopher W. Curtis, Daniel Jay Alford-Lago, Opal Issan
- Abstract summary: We use convolutional autoencoder networks to simultaneously find optimal families of observables.
We also generate both accurate embeddings of the flow into a space of observables and immersions of the observables back into flow coordinates.
This network results in a global transformation of the flow and affords future state prediction via EDMD and the decoder network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Koopman operator theory shows how nonlinear dynamical systems can be
represented as an infinite-dimensional, linear operator acting on a Hilbert
space of observables of the system. However, determining the relevant modes and
eigenvalues of this infinite-dimensional operator can be difficult. The
extended dynamic mode decomposition (EDMD) is one such method for generating
approximations to Koopman spectra and modes, but the EDMD method faces its own
set of challenges due to the need for user-defined observables. To address this
issue, we explore the use of convolutional autoencoder networks to
simultaneously find optimal families of observables which also generate both
accurate embeddings of the flow into a space of observables and immersions of
the observables back into flow coordinates. This network results in a global
transformation of the flow and affords future state prediction via EDMD and the
decoder network. We call this method deep learning dynamic mode decomposition
(DLDMD). The method is tested on canonical nonlinear data sets and is shown to
produce results that outperform a standard DMD approach.
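The role of user-defined observables in EDMD, which DLDMD replaces with a learned autoencoder, can be illustrated with a minimal sketch. This is not the paper's implementation: the dictionary, the toy map, and all names below are illustrative, chosen so that the hand-picked observables span a Koopman-invariant subspace and EDMD recovers the Koopman eigenvalues exactly.

```python
# Minimal EDMD sketch: lift states through a hand-picked dictionary of
# observables, then fit a linear operator K in the lifted space.
import numpy as np

def lift(x):
    """Hand-picked dictionary of observables: psi(x) = (x1, x2, x1^2)."""
    return np.array([x[0], x[1], x[0] ** 2])

def edmd(snapshots):
    """Least-squares fit of K such that Psi_next ~ K Psi."""
    Psi = np.column_stack([lift(x) for x in snapshots[:-1]])
    Psi_next = np.column_stack([lift(x) for x in snapshots[1:]])
    return Psi_next @ np.linalg.pinv(Psi)

# Toy nonlinear map with a known Koopman-invariant subspace spanned by
# the dictionary above: x1 <- a*x1, x2 <- b*x2 + c*x1^2. In the lifted
# coordinates (x1, x2, x1^2) the dynamics are exactly linear, with
# eigenvalues a, b, and a^2.
a, b, c = 0.9, 0.5, 0.3
x = np.array([1.0, 0.5])
snapshots = [x]
for _ in range(30):
    x = np.array([a * x[0], b * x[1] + c * x[0] ** 2])
    snapshots.append(x)

K = edmd(snapshots)
print(np.sort(np.linalg.eigvals(K).real))  # recovers a^2, b, a
```

If the dictionary does not span an invariant subspace, the fitted K is only a projection and its spectrum can be badly biased; that sensitivity to the choice of observables is the difficulty DLDMD targets by learning the lifting map with an autoencoder instead of fixing it by hand.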
Related papers
- On the relationship between Koopman operator approximations and neural ordinary differential equations for data-driven time-evolution predictions
We show that extended dynamic mode decomposition with dictionary learning (EDMD-DL) is equivalent to a neural network representation of the nonlinear discrete-time flow map on the state space.
We implement several variations of neural ordinary differential equations (ODEs) and EDMD-DL, developed by combining different aspects of their respective model structures and training procedures.
We evaluate these methods using numerical experiments on chaotic dynamics in the Lorenz system and a nine-mode model of turbulent shear flow.
arXiv Detail & Related papers (2024-11-20T00:18:46Z)
- Latent Space Energy-based Neural ODEs
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Rigged Dynamic Mode Decomposition: Data-Driven Generalized Eigenfunction Decompositions for Koopman Operators
We introduce the Rigged Dynamic Mode Decomposition (Rigged DMD) algorithm, which computes generalized eigenfunction decompositions of Koopman operators.
Rigged DMD addresses challenges with a data-driven methodology that approximates the Koopman operator's resolvent and its generalized eigenfunctions.
We provide examples, including systems with a Lebesgue spectrum, integrable Hamiltonian systems, the Lorenz system, and a high-Reynolds number lid-driven flow in a two-dimensional square cavity.
arXiv Detail & Related papers (2024-05-01T18:00:18Z)
- Online Variational Sequential Monte Carlo
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Beyond expectations: Residual Dynamic Mode Decomposition and Variance for Stochastic Dynamical Systems
Dynamic Mode Decomposition (DMD) is the poster child of projection-based methods.
We introduce the concept of variance-pseudospectra to gauge statistical coherency.
Our study concludes with practical applications using both simulated and experimental data.
arXiv Detail & Related papers (2023-08-21T13:05:12Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Learned Lifted Linearization Applied to Unstable Dynamic Systems Enabled by Koopman Direct Encoding
It is known that DMD and other data-driven methods face a fundamental difficulty in constructing a Koopman model when applied to unstable systems.
Here we solve the problem by incorporating knowledge about a nonlinear state equation with a learning method for finding an effective set of observables.
The proposed method shows a dramatic improvement over existing DMD and data-driven methods.
arXiv Detail & Related papers (2022-10-24T20:55:46Z)
- Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening Simulations
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures.
This paper proposes a strategy to enable DMD to extract from observations with different mesh topologies and dimensions.
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
- Estimating Koopman operators for nonlinear dynamical systems: a nonparametric approach
The Koopman operator is a mathematical tool that allows for a linear description of non-linear systems.
In this paper we capture their core essence as a dual version of the same framework, incorporating them into the Kernel framework.
We establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through Kernel functions.
arXiv Detail & Related papers (2021-03-25T11:08:26Z)
- Extraction of Discrete Spectra Modes from Video Data Using a Deep Convolutional Koopman Network
Recent deep learning extensions in Koopman theory have enabled compact, interpretable representations of nonlinear dynamical systems.
Deep Koopman networks attempt to learn the Koopman eigenfunctions which capture the coordinate transformation to globally linearize system dynamics.
We demonstrate the ability of a deep convolutional Koopman network (CKN) in automatically identifying independent modes for dynamical systems with discrete spectra.
arXiv Detail & Related papers (2020-10-19T06:26:29Z)
- Applications of Koopman Mode Analysis to Neural Networks
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.