Deep Learning of Koopman Representation for Control
- URL: http://arxiv.org/abs/2010.07546v1
- Date: Thu, 15 Oct 2020 06:41:24 GMT
- Title: Deep Learning of Koopman Representation for Control
- Authors: Yiqiang Han, Wenjian Hao, Umesh Vaidya
- Abstract summary: The proposed approach relies on the Deep Neural Network based learning of Koopman operator for the purpose of control.
The controller is purely data-driven and does not rely on a priori domain knowledge.
The method is applied to two classic dynamical systems on OpenAI Gym environment to demonstrate the capability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a data-driven, model-free approach for the optimal control of dynamical systems. The proposed approach relies on Deep Neural Network (DNN)-based learning of the Koopman operator for the purpose of control. In particular, a DNN is employed for the data-driven identification of the basis functions used in the linear lifting of the nonlinear control system dynamics. The controller synthesis is purely data-driven and does not rely on a priori domain knowledge. The OpenAI Gym environment, commonly employed for Reinforcement Learning-based control design, is used for data generation and for learning the Koopman operator in the control setting. The method is applied to two classic dynamical systems in the OpenAI Gym environment to demonstrate its capability.
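The following is a minimal sketch of the idea described in the abstract: a DNN learns the lifting (basis) functions psi(x) so that the dynamics become approximately linear in the lifted coordinates, z_{t+1} ≈ A z_t + B u_t. The network sizes, loss terms, and training setup below are illustrative assumptions, not the paper's exact implementation; training data would be state-action transitions collected from an OpenAI Gym environment.

```python
import torch
import torch.nn as nn

class KoopmanLift(nn.Module):
    """DNN that learns the basis functions psi(x) for the linear lifting."""
    def __init__(self, state_dim, lift_dim, hidden=128):
        super().__init__()
        self.psi = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, lift_dim),
        )

    def forward(self, x):
        return self.psi(x)

state_dim, action_dim, lift_dim = 4, 1, 16
lift = KoopmanLift(state_dim, lift_dim)
A = nn.Parameter(torch.eye(lift_dim))                  # lifted state-transition matrix
B = nn.Parameter(torch.zeros(lift_dim, action_dim))    # lifted input matrix
C = nn.Parameter(torch.zeros(state_dim, lift_dim))     # map from lifted space back to the state

def koopman_loss(x_t, u_t, x_next):
    z_t, z_next = lift(x_t), lift(x_next)
    z_pred = z_t @ A.T + u_t @ B.T
    pred_loss = ((z_pred - z_next) ** 2).mean()        # linearity in the lifted space
    recon_loss = ((z_t @ C.T - x_t) ** 2).mean()       # recover x from the lifting
    return pred_loss + recon_loss

opt = torch.optim.Adam(list(lift.parameters()) + [A, B, C], lr=1e-3)
```

Once a lifted linear model (A, B) has been identified, a standard linear controller (e.g., LQR or linear MPC) can be synthesized in the lifted coordinates, which is the sense in which the controller design remains purely data-driven.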
Related papers
- Dropout MPC: An Ensemble Neural MPC Approach for Systems with Learned Dynamics [0.0]
We propose a novel sampling-based ensemble neural MPC algorithm that employs the Monte-Carlo dropout technique on the learned system model.
The method targets uncertain systems with complex dynamics, where models derived from first principles are hard to obtain.
arXiv Detail & Related papers (2024-06-04T17:15:25Z)
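A minimal sketch of the Monte-Carlo-dropout ensemble idea from the Dropout MPC entry above, assuming a PyTorch dynamics model; the class and function names are illustrative, and an MPC layer would consume the predictive mean and spread.

```python
import torch
import torch.nn as nn

class DropoutDynamics(nn.Module):
    """Learned dynamics model with dropout layers for epistemic uncertainty."""
    def __init__(self, state_dim, action_dim, hidden=128, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x, u):
        return self.net(torch.cat([x, u], dim=-1))

def mc_predict(model, x, u, n_samples=20):
    """Keep dropout active at inference and sample an ensemble of next states."""
    model.train()                           # keeps the dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x, u) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)      # predictive mean and epistemic spread
```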
- Neural Control: Concurrent System Identification and Control Learning with Neural ODE [13.727727205587804]
We propose a neural ODE-based method, denoted Neural Control (NC), for controlling unknown dynamical systems.
Our model concurrently learns the system dynamics and the optimal controls that guide the system towards target states.
Our experiments demonstrate the effectiveness of our model for learning optimal control of unknown dynamical systems.
arXiv Detail & Related papers (2024-01-03T17:05:17Z)
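A minimal sketch, under stated assumptions, of the concurrent dynamics-and-controller learning described in the Neural Control entry above, with the ODE integrated by explicit Euler steps; the paper's actual architecture and solver may differ.

```python
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Neural ODE right-hand side dx/dt = f(x, u)."""
    def __init__(self, n, m, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n + m, hidden), nn.Tanh(), nn.Linear(hidden, n))

    def forward(self, x, u):
        return self.f(torch.cat([x, u], dim=-1))

class Controller(nn.Module):
    """State-feedback policy u = pi(x)."""
    def __init__(self, n, m, hidden=64):
        super().__init__()
        self.pi = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(), nn.Linear(hidden, m))

    def forward(self, x):
        return self.pi(x)

def rollout_loss(dyn, ctrl, x0, x_target, dt=0.05, steps=50):
    """Integrate the controlled ODE and penalise distance to the target state."""
    x = x0
    for _ in range(steps):
        u = ctrl(x)
        x = x + dt * dyn(x, u)              # explicit Euler step
    return ((x - x_target) ** 2).mean()
```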
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Data-driven End-to-end Learning of Pole Placement Control for Nonlinear Dynamics via Koopman Invariant Subspaces [37.795752939016225]
We propose a data-driven method for controlling black-box nonlinear dynamical systems based on the Koopman operator theory.
A policy network is trained such that the eigenvalues of the Koopman operator of the controlled dynamics are close to the target eigenvalues.
We demonstrate that the proposed method achieves better performance than model-free reinforcement learning and model-based control with system identification.
arXiv Detail & Related papers (2022-08-16T05:57:28Z)
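An illustrative sketch of the eigenvalue-matching objective from the pole-placement entry above: penalise the distance between the eigenvalues of the (learned) closed-loop lifted transition matrix and target eigenvalues. Here A_cl is just a trainable matrix standing in for the matrix induced by the Koopman model and the policy, and the pairing heuristic is an assumption.

```python
import torch

# Target closed-loop eigenvalues (illustrative choice).
target = torch.tensor([0.9 + 0.1j, 0.9 - 0.1j, 0.5 + 0.0j, 0.2 + 0.0j])

A_cl = torch.randn(4, 4, requires_grad=True)       # stand-in for the closed-loop lifted matrix
eig = torch.linalg.eigvals(A_cl)                   # complex eigenvalues

# Pair eigenvalues with targets by magnitude (a simple heuristic, not the
# paper's matching scheme).
eig = eig[torch.argsort(eig.abs(), descending=True)]
tgt = target[torch.argsort(target.abs(), descending=True)]
loss = (eig - tgt).abs().pow(2).mean()
loss.backward()                                    # differentiable when eigenvalues are distinct
```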
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
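A small sketch of a dilated causal temporal-convolution block of the kind the PI-TCN entry above combines with dense feed-forward layers; channel counts, kernel sizes, and the stacking are illustrative.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated 1-D convolution that only looks at past time steps."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding keeps causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))          # pad only on the past side
        return torch.relu(self.conv(x))

# Stack blocks with exponentially growing dilation for a long receptive field.
tcn = nn.Sequential(*[CausalConvBlock(32, dilation=2 ** i) for i in range(4)])
history = torch.randn(8, 32, 100)                        # e.g. embedded state-input history
features = tcn(history)                                  # same length, causal features
```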
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the Koopman basis functions using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Stochastic Deep Model Reference Adaptive Control [9.594432031144715]
We present a Deep Neural Network-based Model Reference Adaptive Control architecture.
Deep Model Reference Adaptive Control uses a Lyapunov-based method to adapt the output-layer weights of the DNN model in real-time.
A data-driven supervised learning algorithm is used to update the inner-layer parameters.
arXiv Detail & Related papers (2021-08-04T14:05:09Z)
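A heavily simplified, generic sketch of a Lyapunov-based output-layer update in the spirit of the MRAC entry above; the reference model, P, B, and the frozen feature layers are placeholders, and the exact adaptive law (including its sign convention) depends on how the adaptive term enters the control law.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 1, 16                     # state, input, and feature dimensions
dt, gamma = 0.01, 10.0                 # integration step and adaptation gain

B = 0.1 * rng.standard_normal((n, m))  # placeholder input matrix
P = np.eye(n)                          # would solve a Lyapunov equation for the reference model
W1 = rng.standard_normal((k, n))       # frozen inner-layer weights (illustrative)
W = np.zeros((k, m))                   # output-layer weights adapted in real time

def phi(x):
    """Features produced by the DNN's frozen inner layers."""
    return np.tanh(W1 @ x)

def adapt_step(W, x, x_ref):
    """One Euler step of a Lyapunov-motivated adaptive law for the output layer."""
    e = x - x_ref                               # tracking error w.r.t. the reference model
    dW = -gamma * np.outer(phi(x), e @ P @ B)   # dW proportional to phi(x) e^T P B
    return W + dt * dW
```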
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
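NDPs predict the parameters of a dynamical system embedded in the policy rather than raw actions. Below is a minimal rollout of a DMP-like second-order system with a radial-basis forcing term; in an NDP the forcing weights and goal would come from the network, whereas here they are placeholders.

```python
import numpy as np

alpha_y, beta_y, alpha_x = 25.0, 6.25, 1.0    # critically damped attractor gains
dt, steps = 0.01, 300
y, dy, g, x = 0.0, 0.0, 1.0, 1.0              # position, velocity, goal, phase

centres = np.linspace(0.0, 1.0, 10)           # radial-basis centres over the phase
widths = 50.0
w = 0.1 * np.random.randn(10)                 # forcing-term weights (placeholder for net output)

trajectory = []
for _ in range(steps):
    psi = np.exp(-widths * (x - centres) ** 2)
    forcing = (psi @ w) / (psi.sum() + 1e-8) * x * (g - y)
    ddy = alpha_y * (beta_y * (g - y) - dy) + forcing    # second-order attractor dynamics
    dy += ddy * dt
    y += dy * dt
    x += -alpha_x * x * dt                               # canonical phase decays to zero
    trajectory.append(y)                                 # smooth trajectory toward the goal
```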
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z)
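A minimal sketch of the winner-take-all ("multiple choice") training signal for a multi-headed dynamics model, as described in the entry above; the context encoder is omitted and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiHeadDynamics(nn.Module):
    """Shared trunk with several prediction heads, each specialising to some dynamics."""
    def __init__(self, state_dim, action_dim, n_heads=4, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, state_dim) for _ in range(n_heads)])

    def forward(self, x, u):
        h = self.trunk(torch.cat([x, u], dim=-1))
        return torch.stack([head(h) for head in self.heads])   # (n_heads, batch, state_dim)

def mcl_loss(model, x, u, x_next):
    """Only the most accurate head receives gradient on this batch of transitions."""
    preds = model(x, u)
    errors = ((preds - x_next) ** 2).mean(dim=(1, 2))           # per-head prediction error
    return errors.min()                                         # winner-take-all objective
```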
- Data Driven Control with Learned Dynamics: Model-Based versus Model-Free Approach [0.0]
We compare two types of data-driven control methods, representing model-based and model-free approaches.
One is a recently proposed method - Deep Koopman Representation for Control (DKRC), which utilizes a deep neural network to map an unknown nonlinear dynamical system to a high-dimensional linear system.
The other is a classic model-free control method based on an actor-critic architecture - Deep Deterministic Policy Gradient (DDPG), which has been proven effective on various dynamical systems.
arXiv Detail & Related papers (2020-06-16T22:18:21Z)
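A short sketch of the model-based side of the comparison above: once a lifted linear model z_{t+1} = A z_t + B u_t has been identified (as in DKRC), a discrete-time LQR gain can be computed in the lifted space. The matrices below are random placeholders for the learned ones.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

n_lift, m = 8, 1
A = np.eye(n_lift) + 0.01 * np.random.randn(n_lift, n_lift)   # placeholder learned lifted A
B = 0.1 * np.random.randn(n_lift, m)                          # placeholder learned lifted B
Q, R = np.eye(n_lift), 0.1 * np.eye(m)                        # LQR cost weights

P = solve_discrete_are(A, B, Q, R)                            # discrete algebraic Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)             # optimal feedback gain

def control(z):
    """Apply the LQR law in the lifted coordinates z = psi(x)."""
    return -K @ z
```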
- Data-driven Koopman Operators for Model-based Shared Control of Human-Machine Systems [66.65503164312705]
We present a data-driven shared control algorithm that can be used to improve a human operator's control of complex machines.
Both the dynamics and information about the user's interaction are learned from observation through the use of a Koopman operator.
We find that model-based shared control significantly improves task and control metrics when compared to a natural learning (user-only) control paradigm.
arXiv Detail & Related papers (2020-06-12T14:14:07Z)
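An illustrative shared-control filter in the spirit of the entry above: the user's command is kept when it roughly agrees with the action suggested by the learned model and is otherwise blended toward the model-based assist. This is a generic sketch, not necessarily the paper's exact arbitration policy.

```python
import numpy as np

def shared_control(u_user, u_model, agreement_threshold=0.0, alpha=0.7):
    """Blend the user's command with a model-based assist command."""
    u_user = np.asarray(u_user, dtype=float)
    u_model = np.asarray(u_model, dtype=float)
    if np.dot(u_user, u_model) > agreement_threshold:    # commands point the same way
        return u_user
    return alpha * u_model + (1.0 - alpha) * u_user      # lean on the model-based assist
```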
This list is automatically generated from the titles and abstracts of the papers on this site.