CKNet: A Convolutional Neural Network Based on Koopman Operator for
Modeling Latent Dynamics from Pixels
- URL: http://arxiv.org/abs/2102.10205v1
- Date: Fri, 19 Feb 2021 23:29:08 GMT
- Title: CKNet: A Convolutional Neural Network Based on Koopman Operator for
Modeling Latent Dynamics from Pixels
- Authors: Yongqian Xiao, Xin Xu, QianLi Lin
- Abstract summary: We present a convolutional neural network (CNN) based on the Koopman operator (CKNet) to identify the latent dynamics from raw pixels.
Experiments show that the identified 32-dimensional dynamics can make valid predictions for 120 steps and generate clear images.
- Score: 5.286010070038216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For systems observed only through raw pixels, it is difficult to
identify the underlying dynamics, especially with a linear operator. In this
work, we present a convolutional
neural network (CNN) based on the Koopman operator (CKNet) to identify the
latent dynamics from raw pixels. CKNet learns an encoder and a decoder to play
the roles of the Koopman eigenfunctions and modes, respectively. The Koopman
eigenvalues can be approximated by the eigenvalues of the learned system
matrix. We present two approaches to realize the encoder: a deterministic one
and a variational one. Because CKNet is trained under the constraints of the
Koopman theory, the identified dynamics are linear, controllable, and
physically interpretable. In addition, the system matrix and control matrix
are represented as trainable tensors. To improve performance, we propose an
auxiliary weight term for the multi-step linearity and prediction losses.
Experiments are conducted on two classic forced dynamical systems with
continuous action spaces, and the results show that the identified
32-dimensional dynamics can make valid predictions for 120 steps and generate
clear images.
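The linear latent model described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming latent dynamics of the form z_{k+1} = A z_k + B u_k with trainable A and B; the exponential decay used here for the auxiliary weight term is a hypothetical choice, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32-dim latent state, 1-dim continuous action.
n, m = 32, 1

# In CKNet the system matrix A and control matrix B are trainable tensors;
# here we draw fixed stand-ins just to illustrate the linear rollout.
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

def rollout(z0, actions):
    """Multi-step prediction in latent space: z_{k+1} = A z_k + B u_k."""
    z = z0
    traj = [z]
    for u in actions:
        z = A @ z + B @ u
        traj.append(z)
    return np.stack(traj)

def weighted_multistep_loss(pred, target, decay=0.9):
    """Auxiliary weighting: later prediction steps receive smaller weights."""
    steps = len(pred)
    w = decay ** np.arange(steps)
    per_step = np.sum((pred - target) ** 2, axis=1)
    return float(np.sum(w * per_step) / np.sum(w))

z0 = rng.standard_normal(n)
us = rng.standard_normal((120, m))
traj = rollout(z0, us)
print(traj.shape)  # (121, 32)
```

Because the rollout is purely linear in the latent state, a 120-step prediction reduces to repeated matrix multiplication, which is what makes the identified dynamics controllable and analyzable by their eigenvalues.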
Related papers
- Koopman-Assisted Reinforcement Learning [8.812992091278668]
The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman (HJB) equation, are ubiquitous in reinforcement learning (RL) and control theory.
This paper explores the connection between the data-driven Koopman operator and Markov Decision Processes (MDPs).
We develop two new RL algorithms to address these limitations.
arXiv Detail & Related papers (2024-03-04T18:19:48Z)
- Extraction of nonlinearity in neural networks with Koopman operator [0.0]
We investigate the degree to which the nonlinearity of the neural network is essential.
We employ the Koopman operator, extended dynamic mode decomposition, and the tensor-train format.
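The extended dynamic mode decomposition (EDMD) mentioned above can be illustrated on a toy system. This is a minimal sketch: states are lifted through a dictionary of observables and a finite-dimensional Koopman matrix K is fit by least squares; the system and dictionary are deliberately chosen so that K is recovered exactly:

```python
import numpy as np

# Toy system x_{k+1} = 0.9 x_k; with dictionary psi(x) = [1, x, x^2] the
# lifted space is Koopman-invariant, so EDMD recovers K exactly.
xs = [0.5]
for _ in range(50):
    xs.append(0.9 * xs[-1])
xs = np.array(xs)

def psi(x):
    # Dictionary of observables evaluated at each sample.
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

Psi_x, Psi_y = psi(xs[:-1]), psi(xs[1:])

# EDMD: least-squares fit of K such that Psi(x') ≈ Psi(x) K.
K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)

# The Koopman eigenvalues reflect how each observable evolves:
# 1 -> 1, x -> 0.9 x, x^2 -> 0.81 x^2.
print(np.sort(np.linalg.eigvals(K).real))  # ≈ [0.81, 0.9, 1.0]
```

For genuinely nonlinear systems the dictionary is not exactly invariant, and the quality of the approximation depends on the choice of observables, which is where learned dictionaries and tensor-train compression come in.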
arXiv Detail & Related papers (2024-02-18T23:54:35Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Learning dynamical systems: an example from open quantum system dynamics [0.0]
We will study the dynamics of a small spin chain coupled with dephasing gates.
We show how Koopman operator learning is an approach to efficiently learn not only the evolution of the density matrix, but also that of every physical observable associated with the system.
arXiv Detail & Related papers (2022-11-12T14:36:13Z)
- Data-driven End-to-end Learning of Pole Placement Control for Nonlinear
Dynamics via Koopman Invariant Subspaces [37.795752939016225]
We propose a data-driven method for controlling black-box nonlinear dynamical systems based on the Koopman operator theory.
A policy network is trained such that the eigenvalues of a Koopman operator of controlled dynamics are close to the target eigenvalues.
We demonstrate that the proposed method achieves better performance than model-free reinforcement learning and model-based control with system identification.
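The core linear-algebra idea above, choosing a feedback gain so that the eigenvalues of the controlled dynamics match target values, can be illustrated with classical pole placement. This is a toy analogue on a known 2x2 linear system via Ackermann's formula, not the paper's learned policy network for black-box nonlinear systems:

```python
import numpy as np

# Known linear system (a discrete double integrator) and target eigenvalues.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
p1, p2 = 0.5, 0.6  # desired closed-loop eigenvalues

# Ackermann's formula: K = [0 1] C^{-1} phi(A), where C is the
# controllability matrix and phi is the desired characteristic polynomial.
C = np.hstack([B, A @ B])
phiA = A @ A - (p1 + p2) * A + (p1 * p2) * np.eye(2)
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ phiA

closed = np.sort(np.linalg.eigvals(A - B @ K).real)
print(closed)  # [0.5 0.6]
```

The paper's contribution is to make this work when A and B are unknown and the dynamics are nonlinear, by lifting into a Koopman-invariant subspace and training the policy end-to-end against the target spectrum.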
arXiv Detail & Related papers (2022-08-16T05:57:28Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and are transferable to new tasks in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Optimising for Interpretability: Convolutional Dynamic Alignment
Networks [108.83345790813445]
We introduce a new family of neural network models called Convolutional Dynamic Alignment Networks (CoDA Nets).
Their core building blocks are Dynamic Alignment Units (DAUs), which are optimised to transform their inputs with dynamically computed weight vectors that align with task-relevant patterns.
CoDA Nets model the classification prediction through a series of input-dependent linear transformations, allowing for linear decomposition of the output into individual input contributions.
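The linear decomposition property can be sketched directly: for any input-dependent linear map y = W(x) x, the output splits exactly into per-input contributions. `W_of_x` below is a hypothetical stand-in for the weights that CoDA Nets compute through Dynamic Alignment Units:

```python
import numpy as np

rng = np.random.default_rng(3)

d_in, d_out = 4, 2
base = rng.standard_normal((d_out, d_in))

def W_of_x(x):
    # Toy input-dependent modulation of a base weight matrix.
    return base * np.tanh(np.abs(x))

x = rng.standard_normal(d_in)
W = W_of_x(x)
y = W @ x

# Contribution of input dimension j: column j of W scaled by x_j.
# Summing contributions over j reconstructs the output exactly.
contribs = W * x
print(np.allclose(contribs.sum(axis=1), y))  # True
```

Because the decomposition is exact rather than an approximation, the per-input contributions can be read off directly as an attribution map, which is the interpretability property the paper optimizes for.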
arXiv Detail & Related papers (2021-09-27T12:39:46Z)
- Extraction of Discrete Spectra Modes from Video Data Using a Deep
Convolutional Koopman Network [0.0]
Recent deep learning extensions in Koopman theory have enabled compact, interpretable representations of nonlinear dynamical systems.
Deep Koopman networks attempt to learn the Koopman eigenfunctions which capture the coordinate transformation to globally linearize system dynamics.
We demonstrate the ability of a deep convolutional Koopman network (CKN) in automatically identifying independent modes for dynamical systems with discrete spectra.
arXiv Detail & Related papers (2020-10-19T06:26:29Z)
- Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.