Imposing Robust Structured Control Constraint on Reinforcement Learning
of Linear Quadratic Regulator
- URL: http://arxiv.org/abs/2011.07011v2
- Date: Fri, 19 Feb 2021 19:28:13 GMT
- Authors: Sayak Mukherjee, Thanh Long Vu
- Abstract summary: This paper presents a structured feedback control design for any generic controller structure, paving the way for distributed learning control.
The ideas from reinforcement learning (RL) in conjunction with control-theoretic sufficient stability and performance guarantees are used to develop the methodology.
We validate our theoretical results with a simulation on a multi-agent network with 6 agents.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper discusses learning a structured feedback control to obtain
sufficient robustness to exogenous inputs for linear dynamic systems with
unknown state matrix. The structural constraint on the controller is necessary
for many cyber-physical systems, and our approach presents a design for any
generic structure, paving the way for distributed learning control. The ideas
from reinforcement learning (RL) in conjunction with control-theoretic
sufficient stability and performance guarantees are used to develop the
methodology. First, a model-based framework is formulated using dynamic
programming to embed the structural constraint in the linear quadratic
regulator (LQR) setting along with sufficient robustness conditions.
Thereafter, we translate these conditions to a data-driven learning-based
framework - robust structured reinforcement learning (RSRL) that enjoys the
control-theoretic guarantees on stability and convergence. We validate our
theoretical results with a simulation on a multi-agent network with 6 agents.
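To make the setting concrete, here is a minimal sketch of the model-based LQR baseline that the paper builds on, with a naive gain projection illustrating what a structural (e.g., block-diagonal, per-agent) feedback constraint means. This is not the paper's RSRL algorithm; the matrices A, B, Q, R and the sparsity mask are illustrative choices for two decoupled double-integrator agents.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two decoupled double integrators (agents 1 and 2), states [p1, v1, p2, v2].
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 0.],
              [0., 1.]])
Q = np.eye(4)   # state cost
R = np.eye(2)   # input cost

# Unstructured LQR: solve the continuous-time algebraic Riccati equation,
# then form the optimal gain K with u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Structural constraint: each agent may only feed back its own states
# (block-diagonal gain pattern). The paper embeds such a constraint inside
# the design itself; here we merely project the gain to show the pattern.
mask = np.array([[1., 1., 0., 0.],
                 [0., 0., 1., 1.]])
K_struct = K * mask

# Closed-loop stability check for the structured gain.
eigs = np.linalg.eigvals(A - B @ K_struct)
print("max real part of closed-loop eigenvalues:", eigs.real.max())
```

For this decoupled example the unstructured optimal gain already satisfies the mask, so projection loses nothing; for coupled dynamics, naive projection can destroy optimality or even stability, which is why the paper derives sufficient conditions under which the structured, data-driven design retains stability and robustness guarantees.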
Related papers
- Neural Internal Model Control: Learning a Robust Control Policy via Predictive Error Feedback [16.46487826869775]
We propose a novel framework, Neural Internal Model Control, which integrates model-based control with RL-based control to enhance robustness.
Our framework streamlines the predictive model by applying Newton-Euler equations for rigid-body dynamics, eliminating the need to capture complex high-dimensional nonlinearities.
We demonstrate the effectiveness of our framework on both quadrotors and quadrupedal robots, achieving superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-11-20T07:07:42Z) - Learning Exactly Linearizable Deep Dynamics Models [0.07366405857677226]
We propose a learning method for exactly linearizable dynamical models to which various control theories can be readily applied to ensure properties such as stability and reliability.
The proposed model is employed for the real-time control of an automotive engine, and the results demonstrate good predictive performance and stable control under constraints.
arXiv Detail & Related papers (2023-11-30T05:40:55Z) - Stable Modular Control via Contraction Theory for Reinforcement Learning [8.742125999252366]
We propose a novel way to integrate control techniques with reinforcement learning (RL) for stability, robustness, and generalization.
We realize such modularity via signal composition and dynamic decomposition.
arXiv Detail & Related papers (2023-11-07T02:41:02Z) - KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed
Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using a feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - RL-Controller: a reinforcement learning framework for active structural
control [0.0]
We present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment.
We show that the proposed framework is easily trainable for a five-story benchmark building, achieving 65% reductions on average in inter-story drifts.
In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns more effective actuator forcing strategies.
arXiv Detail & Related papers (2021-03-13T04:42:13Z) - Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z) - Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z) - Reinforcement Learning of Structured Control for Linear Systems with
Unknown State Matrix [0.0]
We combine ideas from reinforcement learning (RL) with sufficient stability and performance guarantees.
A special control structure enabled by this RL framework is distributed learning control which is necessary for many large-scale cyber-physical systems.
arXiv Detail & Related papers (2020-11-02T17:04:34Z) - Reinforcement Learning for Safety-Critical Control under Model
Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
The reinforcement learning framework learns the model uncertainty present in the control barrier function (CBF) and control Lyapunov function (CLF) constraints.
The resulting RL-CBF-CLF-QP controller addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.