Koopman operator learning using invertible neural networks
- URL: http://arxiv.org/abs/2306.17396v2
- Date: Tue, 23 Jan 2024 05:03:55 GMT
- Title: Koopman operator learning using invertible neural networks
- Authors: Yuhuang Meng, Jianguo Huang, Yue Qiu
- Abstract summary: In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite-dimensional but linear system using a set of observable functions.
Current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results.
We propose FlowDMD (Flow-based Dynamic Mode Decomposition), which utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework.
- Score: 0.6846628460229516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Koopman operator theory, a finite-dimensional nonlinear system is
transformed into an infinite-dimensional but linear system using a set of
observable functions. However, manually selecting observable functions that
span the invariant subspace of the Koopman operator based on prior knowledge is
inefficient and challenging, particularly when little or no information is
available about the underlying system. Furthermore, current methodologies tend
to disregard the importance of the invertibility of observable functions, which
leads to inaccurate results. To address these challenges, we propose FlowDMD
(Flow-based Dynamic Mode Decomposition), which utilizes the Coupling Flow
Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsic
invertibility of the CF-INN to learn the invariant subspaces of the Koopman
operator and accurately reconstruct state variables. Numerical experiments
demonstrate the superior performance of our algorithm compared to
state-of-the-art methodologies.
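The key ingredient of a CF-INN is the coupling layer, and a minimal sketch shows why such networks are invertible in closed form. This is a generic affine coupling layer in NumPy with random (untrained) weights, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP computes scale and shift from the half of the state that is
# passed through unchanged. Weights are random here; FlowDMD would train them.
W1, b1 = rng.normal(size=(8, 1)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def scale_shift(x1):
    h = np.tanh(W1 @ x1 + b1)
    return W2 @ h + b2                 # returns (s, t)

def coupling_forward(x):
    x1, x2 = x[:1], x[1:]              # split the state into two halves
    s, t = scale_shift(x1)
    return np.concatenate([x1, x2 * np.exp(s) + t])   # invertible for any s, t

def coupling_inverse(y):
    y1, y2 = y[:1], y[1:]
    s, t = scale_shift(y1)             # y1 == x1, so s and t are recomputed exactly
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=2)
assert np.allclose(coupling_inverse(coupling_forward(x)), x)   # exact inverse
```

Composing several such layers with alternating splits yields an expressive observable map whose exact inverse is available analytically, which is what allows state variables to be reconstructed from the learned observables.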
Related papers
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
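The mechanism named in the title can be sketched for an ordinary network (the paper treats neural operators, which map between function spaces): linearize the network in its weights, and a Gaussian weight posterior induces a Gaussian process over outputs. Everything below, including the toy covariance, is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "network": scalar input, two weights.
theta_hat = rng.normal(size=2)

def f(x, theta):
    return np.tanh(theta[0] * x) * theta[1]

def jacobian(x, theta, eps=1e-6):
    # Numerical Jacobian of f w.r.t. theta at a batch of inputs x.
    J = np.zeros((len(x), len(theta)))
    for j in range(len(theta)):
        d = np.zeros_like(theta); d[j] = eps
        J[:, j] = (f(x, theta + d) - f(x, theta - d)) / (2 * eps)
    return J

Sigma = 0.1 * np.eye(2)        # stand-in for a Laplace weight covariance
x = np.linspace(-2, 2, 5)
J = jacobian(x, theta_hat)

mean = f(x, theta_hat)         # GP mean: the trained network's prediction
cov = J @ Sigma @ J.T          # GP covariance induced by the linearization
print(mean, np.diag(cov))
```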
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
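The differential-operator claim has a simple one-dimensional analogue that can be checked numerically: a fixed three-tap convolution kernel, scaled by the grid spacing, approximates a second derivative. This toy check is mine, not the paper's code:

```python
import numpy as np

h = 1e-3                          # grid spacing
x = np.arange(0.0, 1.0, h)
u = np.sin(2 * np.pi * x)

# A 3-tap convolution kernel scaled by 1/h^2 acts as a second derivative.
kernel = np.array([1.0, -2.0, 1.0]) / h**2
d2u = np.convolve(u, kernel, mode="valid")

exact = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x[1:-1])
print(np.max(np.abs(d2u - exact)))   # small discretization error
```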
- Extraction of nonlinearity in neural networks with Koopman operator [0.0]
We investigate the degree to which the nonlinearity of the neural network is essential.
We employ the Koopman operator, extended dynamic mode decomposition, and the tensor-train format.
arXiv Detail & Related papers (2024-02-18T23:54:35Z)
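Extended dynamic mode decomposition (EDMD), one of the tools listed above, reduces to a least-squares fit once a dictionary of observables is fixed. A minimal sketch with a hand-picked dictionary (the paper's tensor-train representation is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def step(x):                       # a simple nonlinear system
    return np.array([0.9 * x[0], 0.5 * x[1] + x[0] ** 2])

def psi(x):                        # dictionary of observables
    return np.array([x[0], x[1], x[0] ** 2])

X = [rng.normal(size=2) for _ in range(200)]
PsiX = np.stack([psi(x) for x in X])          # observables at time t
PsiY = np.stack([psi(step(x)) for x in X])    # observables at time t+1

# EDMD: least-squares fit of a linear map K with PsiY ~ PsiX @ K.
K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
print(np.round(K.T, 3))            # Koopman matrix approximation
print(np.linalg.eigvals(K.T))      # Koopman eigenvalue estimates
```

For this system the chosen dictionary spans a Koopman-invariant subspace, so the recovered eigenvalues (0.9, 0.5, 0.81) are exact up to sampling noise.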
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- KoopmanizingFlows: Diffeomorphically Learning Stable Koopman Operators [7.447933533434023]
We propose a novel framework for constructing linear time-invariant (LTI) models for a class of stable nonlinear dynamics.
We learn the Koopman operator features without assuming a predefined library of functions or knowing the spectrum.
We demonstrate the superior efficacy of the proposed method in comparison to a state-of-the-art method on the well-known LASA handwriting dataset.
arXiv Detail & Related papers (2021-12-08T02:40:40Z)
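In the same spirit, though far simpler than the paper's diffeomorphism-based construction with stability guarantees, one can learn observables jointly with a finite Koopman matrix by gradient descent on a linearity loss. A hedged PyTorch sketch; the feature penalty is an ad hoc guard against collapse:

```python
import torch

torch.manual_seed(0)

def step(x):                                   # toy dynamics to be linearized
    return torch.stack([0.9 * x[:, 0], 0.5 * x[:, 1] + x[:, 0] ** 2], dim=-1)

encoder = torch.nn.Sequential(                 # learned observables psi(x)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 3)
)
K = torch.nn.Parameter(torch.eye(3))           # finite Koopman matrix, learned too
opt = torch.optim.Adam(list(encoder.parameters()) + [K], lr=1e-2)

for it in range(2000):
    x = torch.randn(256, 2)
    psi_x, psi_y = encoder(x), encoder(step(x))
    linearity = ((psi_y - psi_x @ K) ** 2).mean()         # psi(x') ~ psi(x) K
    keep_alive = ((psi_x ** 2).mean(0) - 1).abs().mean()  # ad hoc anti-collapse
    loss = linearity + keep_alive
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.linalg.eigvals(K.detach()))        # approximate Koopman eigenvalues
```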
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
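The idea of differentiating at the equilibrium state can be shown on a toy fixed-point layer: the implicit function theorem yields gradients from a single linear solve, with no unrolling of the forward iterations. The contractive map and loss below are illustrative, not a spiking network:

```python
import numpy as np

rng = np.random.default_rng(3)
W = 0.4 * rng.normal(size=(4, 4))     # kept small so the map is contractive
x = rng.normal(size=4)

def f(h, W, x):                        # equilibrium condition: h* = f(h*, x)
    return np.tanh(W @ h + x)

h = np.zeros(4)
for _ in range(100):                   # forward pass: iterate to the fixed point
    h = f(h, W, x)

# Backward pass via the implicit function theorem: the sensitivity of h*
# solves (I - df/dh) dh* = df/dW, independent of the 100 forward iterations.
g = 1 - np.tanh(W @ h + x) ** 2        # derivative of tanh at the equilibrium
J = g[:, None] * W                     # df/dh evaluated at h*
v = np.ones(4)                         # upstream gradient dL/dh* for L = sum(h*)
u = np.linalg.solve((np.eye(4) - J).T, v)
dL_dW = np.outer(u * g, h)             # dL/dW assembled from one adjoint solve
print(dL_dW)
```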
- Deep Learning Enhanced Dynamic Mode Decomposition [0.0]
We use convolutional autoencoder networks to simultaneously find optimal families of observables.
We also generate both accurate embeddings of the flow into a space of observables and immersions of the observables back into flow coordinates.
This network results in a global transformation of the flow and affords future state prediction via EDMD and the decoder network.
arXiv Detail & Related papers (2021-08-10T03:54:23Z)
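The entry's prediction pipeline (encode the state, advance linearly with the EDMD matrix, decode back) can be illustrated with a closed-form encoder standing in for the convolutional autoencoder. All maps below are toy stand-ins:

```python
import numpy as np

def step(x):                          # linear test dynamics (keeps latent model exact)
    return 0.9 * x

def encode(x):                        # stand-in for the trained encoder network
    return np.array([x, x ** 3])

def decode(z):                        # stand-in for the trained decoder network
    return z[0]

xs = np.linspace(-1, 1, 50)
Z = np.stack([encode(x) for x in xs])
Zn = np.stack([encode(step(x)) for x in xs])
K, *_ = np.linalg.lstsq(Z, Zn, rcond=None)    # EDMD fit in latent space

# Multi-step forecast: advance the latent state linearly, then decode.
x0, z = 0.7, encode(0.7)
for t in range(5):
    z = K.T @ z
    x0 = step(x0)
print(decode(z), x0)                  # latent forecast vs. ground truth
```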
- Inverse Problem of Nonlinear Schrödinger Equation as Learning of Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for solving inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
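For intuition, the min-max formulation can be written out for the simplest instrumental-variable SEM, with both players linear instead of neural networks. This sketch follows the general adversarial-moment idea, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta_true = 5000, 2.0

Z = rng.normal(size=n)                 # instrument
U = rng.normal(size=n)                 # unobserved confounder
X = Z + U                              # endogenous regressor
Y = theta_true * X + U + 0.1 * rng.normal(size=n)

theta, w = 0.0, 0.0                    # both "players", linear here
lr = 0.05
for it in range(3000):
    r = Y - theta * X                  # residual under the current estimate
    # Objective: min_theta max_w  E[r * w * Z] - w^2 / 2
    grad_w = np.mean(r * Z) - w        # ascent direction for the critic
    grad_theta = -np.mean(X * Z) * w   # descent direction for the estimator
    w += lr * grad_w
    theta -= lr * grad_theta
print(theta)                           # close to theta_true despite confounding
```

At the saddle point the critic's moment E[(Y - theta X) Z] vanishes, which is exactly the instrumental-variable estimating equation.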
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
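The Lipschitz recurrent unit is a continuous-time RNN whose recurrent matrices are built from symmetric and skew-symmetric parts so that their spectra can be controlled. The following sketch follows the spirit of that construction; the constants and initialization are illustrative rather than the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(5)
d, beta, gamma = 16, 0.75, 1.0

def structured(M, beta, gamma):
    # Blend symmetric and skew-symmetric parts, then shift the spectrum left.
    sym, skew = (M + M.T) / 2, (M - M.T) / 2
    return (1 - beta) * sym + beta * skew - gamma * np.eye(len(M))

M_A = rng.normal(size=(d, d)) / np.sqrt(d)
M_W = rng.normal(size=(d, d)) / np.sqrt(d)
A, W = structured(M_A, beta, gamma), structured(M_W, beta, gamma)
U, b = rng.normal(size=(d, 1)), np.zeros(d)

def rnn_step(h, x, dt=0.01):
    # Explicit Euler step of  dh/dt = A h + tanh(W h + U x + b)
    return h + dt * (A @ h + np.tanh(W @ h + (U @ x).ravel() + b))

h = np.zeros(d)
for t in range(1000):
    h = rnn_step(h, np.array([np.sin(0.01 * t)]))
print(np.max(np.real(np.linalg.eigvals(A))))   # negative: A is contracting
```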
- Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
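The weight-space view in the last entry can be made concrete: stack flattened weight snapshots from successive training steps into DMD snapshot matrices and inspect the resulting eigenvalues. The training loop below is a deliberately tiny stand-in for a neural network:

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny quadratic problem trained by gradient descent (stand-in for a network).
A = rng.normal(size=(20, 5))
y = A @ rng.normal(size=5)
w = np.zeros(5)
snapshots = [w.copy()]
for step in range(200):
    w = w - 0.05 * A.T @ (A @ w - y) / len(y)
    snapshots.append(w.copy())         # one flattened weight snapshot per step

# DMD on weight increments: for gradient descent these evolve exactly linearly.
D = np.diff(np.stack(snapshots), axis=0)
K, *_ = np.linalg.lstsq(D[:-1], D[1:], rcond=None)
eig = np.linalg.eigvals(K.T)
print(np.sort(np.abs(eig)))            # moduli below 1 reflect convergence rates
```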
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.