Generalizing Dynamic Mode Decomposition: Balancing Accuracy and
Expressiveness in Koopman Approximations
- URL: http://arxiv.org/abs/2108.03712v1
- Date: Sun, 8 Aug 2021 19:11:41 GMT
- Title: Generalizing Dynamic Mode Decomposition: Balancing Accuracy and
Expressiveness in Koopman Approximations
- Authors: Masih Haseli, Jorge Cortés
- Abstract summary: This paper tackles the data-driven approximation of unknown dynamical systems using Koopman-operator methods.
We propose the Tunable Symmetric Subspace Decomposition algorithm to refine the dictionary.
We provide a full characterization of the algorithm properties and show that it generalizes both Extended Dynamic Mode Decomposition and Symmetric Subspace Decomposition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles the data-driven approximation of unknown dynamical systems
using Koopman-operator methods. Given a dictionary of functions, these methods
approximate the projection of the action of the operator on the
finite-dimensional subspace spanned by the dictionary. We propose the Tunable
Symmetric Subspace Decomposition algorithm to refine the dictionary, balancing
its expressiveness and accuracy. Expressiveness corresponds to the ability of
the dictionary to describe the evolution of as many observables as possible and
accuracy corresponds to the ability to correctly predict their evolution. Based
on the observation that Koopman-invariant subspaces give rise to exact
predictions, we reason that prediction accuracy is a function of the degree of
invariance of the subspace generated by the dictionary and provide a
data-driven measure of invariance proximity. The proposed algorithm
iteratively prunes the initial functional space to identify a refined
dictionary of functions that satisfies the desired level of accuracy while
retaining as much of the original expressiveness as possible. We provide a full
characterization of the algorithm properties and show that it generalizes both
Extended Dynamic Mode Decomposition and Symmetric Subspace Decomposition.
Simulations on planar systems show the effectiveness of the proposed methods in
producing Koopman approximations of tunable accuracy that capture relevant
information about the dynamical system.
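The least-squares step that underlies EDMD (which the proposed algorithm generalizes) can be sketched in a few lines. The dictionary, dynamics, and variable names below are toy assumptions for illustration only, not the paper's TSSD algorithm:

```python
import numpy as np

# Toy EDMD sketch: given snapshot pairs (x_k, x_{k+1}) and a dictionary of
# observables D, the finite-dimensional Koopman approximation K solves the
# least-squares problem D(X) K ~= D(Y).

def dictionary(x):
    """Assumed monomial dictionary: [1, x, x^2]."""
    return np.stack([np.ones_like(x), x, x**2], axis=1)

X = np.linspace(-1.0, 1.0, 20)   # snapshots of the toy map x -> 0.5 x
Y = 0.5 * X

DX, DY = dictionary(X), dictionary(Y)        # each row: dictionary at one snapshot
K, *_ = np.linalg.lstsq(DX, DY, rcond=None)  # EDMD solution K = DX^+ DY

# This dictionary spans a Koopman-invariant subspace for these linear
# dynamics, so the prediction D(X) K = D(Y) is exact up to round-off,
# matching the paper's observation that invariant subspaces give exact
# predictions.
residual = np.linalg.norm(DX @ K - DY)
```

For a non-invariant dictionary the residual would be nonzero, which is the situation the dictionary-refinement algorithm targets.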
Related papers
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Learning Invariant Subspaces of Koopman Operators--Part 2: Heterogeneous
Dictionary Mixing to Approximate Subspace Invariance [0.0]
This work builds on the models and concepts presented in part 1 to learn approximate dictionary representations of Koopman operators from data.
We show that structured mixing of heterogeneous dictionary functions achieves the same accuracy and dimensional scaling as the deep-learning-based deepDMD algorithm.
arXiv Detail & Related papers (2022-12-14T17:40:00Z) - Learning Invariant Subspaces of Koopman Operators--Part 1: A Methodology
for Demonstrating a Dictionary's Approximate Subspace Invariance [0.0]
In a widely used algorithm, Extended Dynamic Mode Decomposition, the dictionary functions are drawn from a fixed class of functions.
Deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD).
In this paper we analyze the learned dictionaries from deepDMD and explore the theoretical basis for their strong performance.
arXiv Detail & Related papers (2022-12-14T17:33:52Z) - Temporal Forward-Backward Consistency, Not Residual Error, Measures the
Prediction Accuracy of Extended Dynamic Mode Decomposition [0.0]
Extended Dynamic Mode Decomposition (EDMD) is a method to approximate the action of the Koopman operator on a linear function space spanned by a dictionary of functions.
We introduce the novel concept of a consistency index.
We show that this measure, based on using EDMD forward and backward in time, enjoys a number of desirable qualities.
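The forward-backward idea can be illustrated with a small sketch: fit EDMD in forward and in reverse time and check how far their product lies from the identity. The spectral-norm deviation used below is an assumed stand-in for the paper's consistency index, with toy dynamics and dictionary:

```python
import numpy as np

# Illustrative forward-backward check: on a Koopman-invariant subspace the
# forward (X -> Y) and backward (Y -> X) EDMD matrices are inverses of each
# other, so the deviation of K_f @ K_b from the identity reflects how far
# the dictionary's span is from invariance.

def dictionary(x):
    return np.stack([np.ones_like(x), x, x**2], axis=1)  # assumed monomials

X = np.linspace(-1.0, 1.0, 30)
Y = 0.5 * X                                    # toy linear dynamics

DX, DY = dictionary(X), dictionary(Y)
K_f, *_ = np.linalg.lstsq(DX, DY, rcond=None)  # forward EDMD
K_b, *_ = np.linalg.lstsq(DY, DX, rcond=None)  # backward EDMD

# Near zero for an invariant subspace; grows as invariance degrades.
index = np.linalg.norm(np.eye(3) - K_f @ K_b, 2)
```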
arXiv Detail & Related papers (2022-07-15T19:22:22Z) - Object Representations as Fixed Points: Training Iterative Refinement
Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z) - Heterogeneous mixtures of dictionary functions to approximate subspace
invariance in Koopman operators [0.0]
Deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD).
We discover a novel class of dictionary functions to approximate Koopman observables.
Our results provide a hypothesis to explain the success of deep neural networks in learning numerical approximations to Koopman operators.
arXiv Detail & Related papers (2022-06-27T19:04:03Z) - Deep Identification of Nonlinear Systems in Koopman Form [0.0]
The present paper treats the identification of nonlinear dynamical systems using Koopman-based deep state-space encoders.
An input-affine formulation is considered for the lifted model structure and we address both full and partial state availability.
arXiv Detail & Related papers (2021-10-06T08:50:56Z) - Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
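For context, the exact (non-randomized) leverage scores of a tall matrix can be read off a thin QR factorization; the baseline sketch below is an illustration only, not the paper's rank-revealing randomized method:

```python
import numpy as np

# Baseline leverage scores: the i-th score is the squared norm of the i-th
# row of any orthonormal basis Q for the column space of A.

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))   # tall matrix with assumed random entries

Q, _ = np.linalg.qr(A)              # thin QR: columns of Q span col(A)
scores = np.sum(Q**2, axis=1)       # leverage score of each row of A

# Scores lie in [0, 1] and sum to rank(A) (here 5, almost surely).
```

This exact computation costs a full factorization, which motivates the randomized estimators studied in the paper.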
arXiv Detail & Related papers (2021-05-23T19:21:55Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.