Stable Online Control of Linear Time-Varying Systems
- URL: http://arxiv.org/abs/2104.14134v2
- Date: Fri, 30 Apr 2021 03:41:41 GMT
- Title: Stable Online Control of Linear Time-Varying Systems
- Authors: Guannan Qu, Yuanyuan Shi, Sahin Lale, Anima Anandkumar, Adam Wierman
- Abstract summary: COCO-LQ is an efficient online control algorithm that guarantees input-to-state stability for a large class of LTV systems.
We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Linear time-varying (LTV) systems are widely used for modeling real-world
dynamical systems due to their generality and simplicity. Providing stability
guarantees for LTV systems is one of the central problems in control theory.
However, existing approaches that guarantee stability typically lead to
significantly sub-optimal cumulative control cost in online settings where only
current or short-term system information is available. In this work, we propose
an efficient online control algorithm, COvariance Constrained Online Linear
Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a
large class of LTV systems while also minimizing the control cost. The proposed
method incorporates a state covariance constraint into the semi-definite
programming (SDP) formulation of the LQ optimal controller. We empirically
demonstrate the performance of COCO-LQ in both synthetic experiments and a
power system frequency control example.
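The abstract says COCO-LQ adds a state covariance constraint to the SDP formulation of the LQ optimal controller. As background only, here is a minimal sketch of the unconstrained finite-horizon LQ baseline for a scalar time-varying system, solved by a backward Riccati recursion; the covariance-constrained SDP itself is not reproduced, and all numeric values below are illustrative, not from the paper.

```python
# Sketch: finite-horizon LQ for a scalar LTV system
#   x_{t+1} = a_t x_t + b_t u_t,  cost = sum_t (q x_t^2 + r u_t^2).
# COCO-LQ (per the abstract) instead solves an SDP with an added state
# covariance constraint; this shows only the plain LQ recursion it builds on.

def lq_gains(a_seq, b_seq, q=1.0, r=1.0):
    """Backward Riccati recursion; returns gains K_t for u_t = -K_t x_t."""
    P = q  # terminal cost-to-go
    gains = []
    for t in reversed(range(len(a_seq))):
        a, b = a_seq[t], b_seq[t]
        K = a * b * P / (r + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        gains.append(K)
    gains.reverse()
    return gains

# Illustrative (hypothetical) time-varying coefficients
a_seq = [1.2, 0.8, 1.1]
b_seq = [1.0, 1.0, 1.0]
gains = lq_gains(a_seq, b_seq)
```

Greedily applying such short-horizon LQ gains online is exactly the setting where, per the abstract, stability can fail for LTV systems, which motivates the extra covariance constraint.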
Related papers
- Neural Port-Hamiltonian Models for Nonlinear Distributed Control: An Unconstrained Parametrization Approach
Neural Networks (NNs) can be leveraged to parametrize control policies that yield good performance.
NNs' sensitivity to small input changes poses a risk of destabilizing the closed-loop system.
To address these problems, we leverage the framework of port-Hamiltonian systems to design continuous-time distributed control policies.
The effectiveness of the proposed distributed controllers is demonstrated through consensus control of non-holonomic mobile robots.
arXiv Detail & Related papers (2024-11-15T10:44:29Z)
- Resource Optimization for Tail-Based Control in Wireless Networked Control Systems
Achieving control stability is one of the key design challenges of scalable Wireless Networked Control Systems.
This paper explores the use of an alternative control concept defined as tail-based control, which extends the classical Linear Quadratic Regulator (LQR) cost function for multiple dynamic control systems over a shared wireless network.
arXiv Detail & Related papers (2024-06-20T13:27:44Z)
- Robust stabilization of polytopic systems via fast and reliable neural network-based approximations
We consider the design of fast and reliable neural network (NN)-based approximations of traditional stabilizing controllers for linear systems with polytopic uncertainty.
We certify the closed-loop stability and performance of a linear uncertain system when a trained rectified linear unit (ReLU)-based approximation replaces such traditional controllers.
arXiv Detail & Related papers (2022-04-27T21:58:07Z)
- Stabilizing Dynamical Systems via Policy Gradient Methods
We provide a model-free algorithm for stabilizing fully observed dynamical systems.
We prove that this method efficiently recovers a stabilizing controller for linear systems.
We empirically evaluate the effectiveness of our approach on common control benchmarks.
arXiv Detail & Related papers (2021-10-13T00:58:57Z)
- Sparsity in Partially Controllable Linear Systems
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Enforcing robust control guarantees within neural network policies
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory
We study linear controllers under quadratic costs, a model also known as the linear quadratic regulator (LQR).
We present two different semi-definite programs (SDPs) that result in a controller stabilizing all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm -- eXploration -- that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.