Multiplicative Dynamic Mode Decomposition
- URL: http://arxiv.org/abs/2405.05334v1
- Date: Wed, 8 May 2024 18:09:16 GMT
- Title: Multiplicative Dynamic Mode Decomposition
- Authors: Nicolas Boullé, Matthew J. Colbrook
- Abstract summary: We introduce Multiplicative Dynamic Mode Decomposition (MultDMD), which enforces the multiplicative structure inherent in the Koopman operator within its finite-dimensional approximation.
MultDMD presents a structured approach to finite-dimensional approximations and can accurately reflect the spectral properties of the Koopman operator.
We elaborate on the theoretical framework of MultDMD, detailing its formulation, optimization strategy, and convergence properties.
- Score: 4.028503203417233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Koopman operators are infinite-dimensional operators that linearize nonlinear dynamical systems, facilitating the study of their spectral properties and enabling the prediction of the time evolution of observable quantities. Recent methods have aimed to approximate Koopman operators while preserving key structures. However, approximating Koopman operators typically requires a dictionary of observables to capture the system's behavior in a finite-dimensional subspace. The selection of these functions is often heuristic, may result in the loss of spectral information, and can severely complicate structure preservation. This paper introduces Multiplicative Dynamic Mode Decomposition (MultDMD), which enforces the multiplicative structure inherent in the Koopman operator within its finite-dimensional approximation. Leveraging this multiplicative property, we guide the selection of observables and define a constrained optimization problem for the matrix approximation, which can be efficiently solved. MultDMD presents a structured approach to finite-dimensional approximations and can more accurately reflect the spectral properties of the Koopman operator. We elaborate on the theoretical framework of MultDMD, detailing its formulation, optimization strategy, and convergence properties. The efficacy of MultDMD is demonstrated through several examples, including the nonlinear pendulum, the Lorenz system, and fluid dynamics data, where we demonstrate its remarkable robustness to noise.
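For orientation, the finite-dimensional approximation described above is, in the standard (unconstrained) EDMD setting, a least-squares fit of a matrix acting on a dictionary of observables; MultDMD replaces this fit with a constrained optimization that enforces the multiplicative structure. The sketch below shows only the unconstrained baseline, with an illustrative monomial dictionary and toy dynamics that are assumptions of this example rather than anything from the paper.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Plain (unconstrained) EDMD least-squares fit.

    X, Y : (n_samples, n_states) snapshot pairs with Y[i] = F(X[i]).
    dictionary : maps (n_samples, n_states) -> (n_samples, n_features).
    Returns the matrix K minimizing ||Psi_Y - Psi_X K||_F, i.e. the
    finite-dimensional approximation of the Koopman operator.
    """
    Psi_X, Psi_Y = dictionary(X), dictionary(Y)
    G = Psi_X.conj().T @ Psi_X          # Gram matrix of the dictionary
    A = Psi_X.conj().T @ Psi_Y          # cross-correlation matrix
    return np.linalg.pinv(G) @ A

def monomials(Z):
    # Illustrative dictionary for a 2D state (not the observables MultDMD selects).
    x, y = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Toy snapshot data from a weakly nonlinear 2D map, for demonstration only.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2])
K = edmd(X, Y, monomials)
koopman_eigs = np.linalg.eigvals(K)     # approximate Koopman eigenvalues
```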
Related papers
- Rigged Dynamic Mode Decomposition: Data-Driven Generalized Eigenfunction Decompositions for Koopman Operators [0.0]
We introduce the Rigged Dynamic Mode Decomposition (Rigged DMD) algorithm, which computes generalized eigenfunction decompositions of Koopman operators.
Rigged DMD addresses challenges with a data-driven methodology that approximates the Koopman operator's resolvent and its generalized eigenfunctions.
We provide examples, including systems with a Lebesgue spectrum, integrable Hamiltonian systems, the Lorenz system, and a high-Reynolds number lid-driven flow in a two-dimensional square cavity.
arXiv Detail & Related papers (2024-05-01T18:00:18Z)
- On the Convergence of Hermitian Dynamic Mode Decomposition [4.028503203417233]
We study the convergence of Hermitian Dynamic Mode Decomposition to the spectral properties of self-adjoint Koopman operators.
We numerically demonstrate our results by applying them to two-dimensional Schrödinger equations.
arXiv Detail & Related papers (2024-01-06T11:13:16Z)
- Beyond expectations: Residual Dynamic Mode Decomposition and Variance for Stochastic Dynamical Systems [8.259767785187805]
Dynamic Mode Decomposition (DMD) is the poster child of projection-based methods.
We introduce the concept of variance-pseudospectra to gauge statistical coherency.
Our study concludes with practical applications using both simulated and experimental data.
arXiv Detail & Related papers (2023-08-21T13:05:12Z)
- GFlowNet-EM for learning compositional latent variable models [115.96660869630227]
A key tradeoff in modeling the posteriors over latents is between expressivity and tractable optimization.
We propose the use of GFlowNets, algorithms for sampling from an unnormalized density.
By training GFlowNets to sample from the posterior over latents, we take advantage of their strengths as amortized variational algorithms.
arXiv Detail & Related papers (2023-02-13T18:24:21Z)
- The mpEDMD Algorithm for Data-Driven Computations of Measure-Preserving Dynamical Systems [0.0]
We introduce measure-preserving extended dynamic mode decomposition (mpEDMD), the first truncation method whose eigendecomposition converges to the spectral quantities of Koopman operators.
mpEDMD is flexible and easy to use with any pre-existing DMD-type method, and with different types of data.
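As context for how a structural constraint can be imposed in closed form, the sketch below solves an orthogonal-Procrustes-constrained DMD fit. This is only an illustration in the spirit of measure preservation, under the simplifying assumption of a dictionary that is orthonormal with respect to the sampling measure (identity Gram matrix); the actual mpEDMD algorithm handles a general weighted inner product and is not reproduced here.

```python
import numpy as np

def procrustes_dmd(Psi_X, Psi_Y):
    """Best orthogonal K minimizing ||Psi_Y - Psi_X K||_F.

    Psi_X, Psi_Y : (n_samples, n_features) dictionary evaluations.
    Assumes an orthonormal dictionary, so orthogonality of K stands in for
    unitarity of the Koopman approximation (simplifying assumption).
    """
    U, _, Vt = np.linalg.svd(Psi_X.conj().T @ Psi_Y)
    return U @ Vt   # eigenvalues of this K lie on the unit circle
```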
arXiv Detail & Related papers (2022-09-06T06:37:54Z)
- Spectral Decomposition Representation for Reinforcement Learning [100.0424588013549]
We propose an alternative spectral method, Spectral Decomposition Representation (SPEDER), that extracts a state-action abstraction from the dynamics without inducing spurious dependence on the data collection policy.
A theoretical analysis establishes the sample efficiency of the proposed algorithm in both the online and offline settings.
An experimental investigation demonstrates superior performance over current state-of-the-art algorithms across several benchmarks.
arXiv Detail & Related papers (2022-08-19T19:01:30Z)
- Residual Dynamic Mode Decomposition: Robust and verified Koopmanism [0.0]
Dynamic Mode Decomposition (DMD) describes complex dynamic processes through a hierarchy of simpler coherent features.
We present Residual Dynamic Mode Decomposition (ResDMD), which overcomes challenges through the data-driven computation of residuals associated with the full infinite-dimensional Koopman operator.
ResDMD computes spectra and pseudospectra of general Koopman operators with error control, and computes smoothed approximations of spectral measures (including continuous spectra) with explicit high-order convergence theorems.
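To make those residuals concrete, the sketch below evaluates the relative residual of a candidate eigenpair directly from snapshot data via the three Gram-type matrices built from the dictionary. It follows the standard ResDMD-style construction, but the variable names, quadrature weights, and dictionary handling are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def koopman_residual(Psi_X, Psi_Y, lam, c, w=None):
    """Relative residual of a candidate Koopman eigenpair (lam, g = Psi @ c).

    Psi_X, Psi_Y : (n_samples, n_features) dictionary values at x_j and y_j.
    lam          : candidate eigenvalue.
    c            : (n_features,) coefficients of the candidate eigenfunction.
    w            : optional quadrature weights (uniform by default).
    """
    n = Psi_X.shape[0]
    w = np.full(n, 1.0 / n) if w is None else np.asarray(w)
    W = np.diag(w)
    G = Psi_X.conj().T @ W @ Psi_X   # <psi_i, psi_j>
    A = Psi_X.conj().T @ W @ Psi_Y   # <psi_i, K psi_j>
    L = Psi_Y.conj().T @ W @ Psi_Y   # <K psi_i, K psi_j>
    num = c.conj() @ (L - lam * A.conj().T - np.conj(lam) * A + abs(lam) ** 2 * G) @ c
    den = c.conj() @ G @ c
    return float(np.sqrt(max(num.real, 0.0) / den.real))
```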
arXiv Detail & Related papers (2022-05-19T18:02:44Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Estimating Koopman operators for nonlinear dynamical systems: a nonparametric approach [77.77696851397539]
The Koopman operator is a mathematical tool that allows for a linear description of non-linear systems.
In this paper, we capture their core essence as a dual version of the same framework and incorporate them into the kernel methods framework.
We establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through kernel functions.
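As a generic illustration of estimating a Koopman approximation through kernel functions (a kernel-DMD-style construction, not necessarily the estimator proposed in that paper), the sketch below forms the two Gram matrices k(x_i, x_j) and k(y_i, x_j) and solves a regularized linear system; the kernel, bandwidth, and regularization are assumptions of this example.

```python
import numpy as np

def kernel_koopman(X, Y, kernel, reg=1e-8):
    """Kernel-based Koopman approximation (kernel-DMD style).

    X, Y   : (n_samples, n_states) snapshot pairs with Y[i] = F(X[i]).
    kernel : callable k(A, B) returning a Gram matrix of shape (len(A), len(B)).
    Returns a matrix whose nonzero eigenvalues approximate Koopman eigenvalues.
    """
    G = kernel(X, X)                          # G_ij = k(x_i, x_j)
    A = kernel(Y, X)                          # A_ij = k(y_i, x_j)
    return np.linalg.solve(G + reg * np.eye(G.shape[0]), A)

def rbf(A, B, sigma=1.0):
    # Gaussian kernel; the bandwidth is an illustrative choice.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2])
koopman_eigs = np.linalg.eigvals(kernel_koopman(X, Y, rbf))
```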
arXiv Detail & Related papers (2021-03-25T11:08:26Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To keep the number of interaction parameters tractable, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
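As a rough illustration of representing interaction parameters implicitly rather than storing them densely, the sketch below evaluates a degree-2 model whose pairwise interaction weights are given by low-rank factors, in the style of factorization machines; the canonical polyadic approach generalizes this idea to higher-order interactions and richer feature maps, and the names and rank here are illustrative assumptions.

```python
import numpy as np

def low_rank_interaction_model(x, w0, w, V):
    """Degree-2 model with factorized pairwise interaction weights.

    x  : (d,) feature vector; w0 : bias; w : (d,) linear weights.
    V  : (d, r) factor matrix; the interaction weight between features i and j
         is <V[i], V[j]> instead of an explicitly stored d x d (or higher) tensor.
    """
    # Sum over i < j of <V[i], V[j]> * x[i] * x[j], computed in O(d * r).
    pairwise = 0.5 * (((V.T @ x) ** 2).sum() - ((V**2).T @ (x**2)).sum())
    return w0 + w @ x + pairwise
```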
arXiv Detail & Related papers (2020-01-27T22:38:40Z)