Learning Exactly Linearizable Deep Dynamics Models
- URL: http://arxiv.org/abs/2311.18261v1
- Date: Thu, 30 Nov 2023 05:40:55 GMT
- Title: Learning Exactly Linearizable Deep Dynamics Models
- Authors: Ryuta Moriyasu, Masayuki Kusunoki, Kenji Kashima
- Abstract summary: We propose a learning method for exactly linearizable dynamical models that can easily apply various control theories to ensure stability, reliability, etc.
The proposed model is employed for the real-time control of an automotive engine, and the results demonstrate good predictive performance and stable control under constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on control using models based on machine-learning methods has now
shifted to the practical engineering stage. Achieving high performance and
theoretically guaranteeing the safety of the system is critical for such
applications. In this paper, we propose a learning method for exactly
linearizable dynamical models that can easily apply various control theories to
ensure stability, reliability, etc., and to provide a high degree of freedom of
expression. As an example, we present a design that combines simple linear
control and control barrier functions. The proposed model is employed for the
real-time control of an automotive engine, and the results demonstrate good
predictive performance and stable control under constraints.
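The abstract does not include implementation details, but the core idea of exact (feedback) linearization combined with a control barrier function (CBF) can be sketched on a toy scalar system. Everything below (the drift `f`, input gain `g`, the gains, and the barrier `h`) is an illustrative assumption, not the authors' engine model.

```python
# Toy sketch: exact (feedback) linearization plus a control barrier
# function (CBF) filter on a scalar control-affine system
#   x_dot = f(x) + g(x) * u,   with g(x) > 0,
# so u = (v - f(x)) / g(x) renders the closed loop x_dot = v linear.
# The specific f, g, gains, and barrier are assumptions for
# illustration, not the model from the paper.

def f(x):          # drift term (assumed)
    return -x + x**3

def g(x):          # input gain, kept positive so the linearization is exact
    return 1.0 + 0.5 * x**2

def linear_control(x, k=2.0):
    """Simple linear law v = -k*x for the linearized system x_dot = v."""
    return -k * x

def h(x, x_max=1.5):
    """Barrier h(x) >= 0 encodes the state constraint |x| <= x_max."""
    return x_max**2 - x**2

def cbf_filter(x, v, alpha=5.0):
    """Minimally modify v so that h_dot >= -alpha * h(x).

    In the linearized coordinates h_dot = -2*x*v, so the CBF condition
    2*x*v <= alpha*h(x) is a one-sided linear inequality in v that can
    be solved in closed form instead of via a QP.
    """
    if x > 0:
        v = min(v, alpha * h(x) / (2 * x))
    elif x < 0:
        v = max(v, alpha * h(x) / (2 * x))
    return v

def step(x, dt=0.01):
    v = cbf_filter(x, linear_control(x))   # safe virtual input
    u = (v - f(x)) / g(x)                  # exact linearization
    return x + dt * (f(x) + g(x) * u)      # explicit Euler integration

x = 1.4                                    # start inside the safe set
for _ in range(1000):
    x = step(x)
print(abs(x) <= 1.5)                       # trajectory stays in the safe set
```

Because the model is exactly linearizable, the safety condition becomes linear in the virtual input `v`, which is what makes this combination of simple linear control and a CBF filter tractable.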
Related papers
- Safe Deep Model-Based Reinforcement Learning with Lyapunov Functions [2.50194939587674]
We propose a new model-based RL framework to enable efficient policy learning with unknown dynamics.
We introduce and explore a novel method for adding safety constraints for model-based RL during training and policy learning.
arXiv Detail & Related papers (2024-05-25T11:21:12Z)
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control [46.81433026280051]
We present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems.
Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions.
arXiv Detail & Related papers (2022-10-23T00:45:05Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Bridging Model-based Safety and Model-free Reinforcement Learning through System Identification of Low Dimensional Linear Models [16.511440197186918]
We propose a new method to combine model-based safety with model-free reinforcement learning.
We show that a low-dimensional dynamical model is sufficient to capture the dynamics of the closed-loop system.
We show that the identified linear model can provide safety guarantees via a safety-critical optimal control framework.
arXiv Detail & Related papers (2022-05-11T22:03:18Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Structured Hammerstein-Wiener Model Learning for Model Predictive Control [0.2752817022620644]
This paper aims to improve the reliability of optimal control using models constructed by machine-learning methods.
We propose a model that combines the Hammerstein-Wiener model with convex neural networks.
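A Hammerstein-Wiener model sandwiches linear time-invariant dynamics between a static input nonlinearity and a static output nonlinearity. The sketch below shows that generic structure on a scalar state; the specific nonlinearities are assumptions for illustration, and the paper's (input-)convex neural network blocks are not reproduced here.

```python
# Generic Hammerstein-Wiener structure: static input nonlinearity,
# then linear time-invariant dynamics, then static output nonlinearity.
# The particular f_in, f_out, and scalar state are illustrative
# assumptions, not the convex-network blocks from the paper.
import math

def input_nonlinearity(u):       # static input block f_in (assumed form)
    return math.tanh(u)

def output_nonlinearity(z):      # static output block f_out (assumed form)
    return z + 0.1 * z**2

def hw_step(x, u, a=0.9, b=0.1):
    """One step of x[k+1] = a*x[k] + b*f_in(u[k]),  y[k] = f_out(x[k])."""
    y = output_nonlinearity(x)
    x_next = a * x + b * input_nonlinearity(u)
    return x_next, y

x, ys = 0.0, []
for k in range(50):
    x, y = hw_step(x, 1.0)       # constant input u = 1
    ys.append(y)
# the state converges toward b*tanh(1)/(1 - a) = tanh(1)
```

Keeping the dynamics block linear is what makes this structure attractive for model predictive control: only the two static maps are nonlinear, and choosing them convex keeps the resulting optimal control problem tractable.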
arXiv Detail & Related papers (2021-07-09T06:41:34Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of control as hybrid inference (CHI) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties [1.7033108359337459]
The proposed control combines a conventional control method with deep reinforcement learning.
With reinforcement learning, we can directly learn a control law that compensates for modeling uncertainties.
In comparison with traditional deep reinforcement learning methods, our learning-based control provides stability guarantees and better sample efficiency.
arXiv Detail & Related papers (2020-03-30T22:02:13Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.