Learning Invariant Subspaces of Koopman Operators--Part 1: A Methodology
for Demonstrating a Dictionary's Approximate Subspace Invariance
- URL: http://arxiv.org/abs/2212.07358v1
- Date: Wed, 14 Dec 2022 17:33:52 GMT
- Title: Learning Invariant Subspaces of Koopman Operators--Part 1: A Methodology
for Demonstrating a Dictionary's Approximate Subspace Invariance
- Authors: Charles A. Johnson, Shara Balakrishnan and Enoch Yeung
- Abstract summary: In a widely used algorithm, Extended Dynamic Mode Decomposition (EDMD), the dictionary functions are drawn from a fixed class of functions.
Deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD).
In this paper we analyze the learned dictionaries from deepDMD and explore the theoretical basis for their strong performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Koopman operators model nonlinear dynamics as a linear dynamic system acting
on a nonlinear function as the state. This nonstandard state is often called a
Koopman observable and is usually approximated numerically by a superposition
of functions drawn from a dictionary. In a widely used algorithm, Extended
Dynamic Mode Decomposition (EDMD), the dictionary functions are drawn from a
fixed class of functions. Recently, deep learning combined with EDMD has been
used to learn novel dictionary functions in an algorithm called deep dynamic
mode decomposition (deepDMD). The learned representation both (1) accurately
models the underlying nonlinear dynamics and (2) scales well with the
dimension of the original system. In
this paper we analyze the learned dictionaries from deepDMD and explore the
theoretical basis for their strong performance. We explore State-Inclusive
Logistic Lifting (SILL) dictionary functions to approximate Koopman
observables. Error analysis of these dictionary functions shows that they
satisfy a property of subspace approximation, which we define as uniform
finite approximate closure. Our results provide a hypothesis to explain the
success of deep neural networks in learning numerical approximations to
Koopman operators. Part 2 of this paper will extend this explanation by
demonstrating the approximate subspace invariance of heterogeneous
dictionaries and presenting a head-to-head numerical comparison of deepDMD
and low-parameter heterogeneous dictionary learning.
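The EDMD procedure summarized in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the toy system, the monomial dictionary (used here in place of the paper's SILL logistic functions), and all variable names are assumptions, chosen so that the dictionary span is exactly Koopman-invariant and the least-squares fit recovers the finite-dimensional Koopman approximation to machine precision.

```python
import numpy as np

# Toy nonlinear system (Euler-discretized): x' = mu*x, y' = lam*(y - x^2).
# Its dynamics leave the span of {1, x, y, x^2} invariant, so this
# dictionary is an exactly Koopman-invariant subspace.
mu, lam, dt = -0.1, -1.0, 0.01

def step(state):
    x, y = state
    return np.array([x + dt * mu * x, y + dt * lam * (y - x**2)])

def dictionary(state):
    x, y = state
    # Constant plus state-inclusive monomial lifts.
    return np.array([1.0, x, y, x**2])

# Collect snapshot pairs (psi(x_k), psi(x_{k+1})) along one trajectory.
state = np.array([1.0, 0.5])
Psi, Psi_next = [], []
for _ in range(500):
    nxt = step(state)
    Psi.append(dictionary(state))
    Psi_next.append(dictionary(nxt))
    state = nxt
Psi, Psi_next = np.array(Psi), np.array(Psi_next)

# EDMD: least-squares fit of the finite Koopman matrix K, Psi_next ~= Psi @ K.
K, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)

# The relative residual measures how close the dictionary span is to
# being invariant under the Koopman operator.
residual = np.linalg.norm(Psi_next - Psi @ K) / np.linalg.norm(Psi_next)
print(f"relative residual: {residual:.2e}")
```

For a dictionary that is only approximately invariant (such as the SILL logistic lifts studied in the paper), the same fit applies but the residual stays bounded away from zero; "uniform finite approximate closure" is a statement about how that residual behaves.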
Related papers
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- Learning Invariant Subspaces of Koopman Operators--Part 2: Heterogeneous Dictionary Mixing to Approximate Subspace Invariance [0.0]
This work builds on the models and concepts presented in Part 1 to learn approximate dictionary representations of Koopman operators from data.
We show that structured mixing of heterogeneous dictionary functions achieves the same accuracy and dimensional scaling as the deep-learning-based deepDMD algorithm.
arXiv Detail & Related papers (2022-12-14T17:40:00Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Heterogeneous mixtures of dictionary functions to approximate subspace invariance in Koopman operators [0.0]
Deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD).
We discover a novel class of dictionary functions to approximate Koopman observables.
Our results provide a hypothesis to explain the success of deep neural networks in learning numerical approximations to Koopman operators.
arXiv Detail & Related papers (2022-06-27T19:04:03Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction from two views that finds maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Deep Identification of Nonlinear Systems in Koopman Form [0.0]
The present paper treats the identification of nonlinear dynamical systems using Koopman-based deep state-space encoders.
An input-affine formulation is considered for the lifted model structure, and we address both full and partial state availability.
arXiv Detail & Related papers (2021-10-06T08:50:56Z)
- Extended dynamic mode decomposition with dictionary learning using neural ordinary differential equations [0.8701566919381223]
We propose an algorithm to perform extended dynamic mode decomposition using neural ordinary differential equations (NODEs).
We show the superior parameter efficiency of the proposed method through numerical experiments.
arXiv Detail & Related papers (2021-10-01T06:56:14Z)
- Generalizing Dynamic Mode Decomposition: Balancing Accuracy and Expressiveness in Koopman Approximations [0.0]
This paper tackles the data-driven approximation of unknown dynamical systems using Koopman-operator methods.
We propose the Tunable Symmetric Subspace Decomposition algorithm to refine the dictionary.
We provide a full characterization of the algorithm's properties and show that it generalizes both Extended Dynamic Mode Decomposition and Symmetric Subspace Decomposition.
arXiv Detail & Related papers (2021-08-08T19:11:41Z)
- Estimating Koopman operators for nonlinear dynamical systems: a nonparametric approach [77.77696851397539]
The Koopman operator is a mathematical tool that allows for a linear description of nonlinear systems.
In this paper we capture its core essence as a dual version of the same framework, incorporating it into the kernel framework.
We establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through kernel functions.
arXiv Detail & Related papers (2021-03-25T11:08:26Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It can be used for governing analytical equation discovery as well as for partial differential equation (PDE) discovery.
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.