Thompson Sampling Achieves $\tilde O(\sqrt{T})$ Regret in Linear
Quadratic Control
- URL: http://arxiv.org/abs/2206.08520v1
- Date: Fri, 17 Jun 2022 02:47:53 GMT
- Title: Thompson Sampling Achieves $\tilde O(\sqrt{T})$ Regret in Linear
Quadratic Control
- Authors: Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Anima Anandkumar,
Babak Hassibi
- Abstract summary: We study the problem of adaptive control of stabilizable linear-quadratic regulators (LQRs) using Thompson Sampling (TS).
We propose an efficient TS algorithm for the adaptive control of LQRs, TSAC, that attains $\tilde O(\sqrt{T})$ regret, even for multidimensional systems.
- Score: 85.22735611954694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thompson Sampling (TS) is an efficient method for decision-making under
uncertainty, where an action is sampled from a carefully prescribed
distribution which is updated based on the observed data. In this work, we
study the problem of adaptive control of stabilizable linear-quadratic
regulators (LQRs) using TS, where the system dynamics are unknown. Previous
works have established that $\tilde O(\sqrt{T})$ frequentist regret is optimal
for the adaptive control of LQRs. However, the existing methods either work
only in restrictive settings, require a priori known stabilizing controllers,
or utilize computationally intractable approaches. We propose an efficient TS
algorithm for the adaptive control of LQRs, TS-based Adaptive Control, TSAC,
that attains $\tilde O(\sqrt{T})$ regret, even for multidimensional systems,
thereby solving the open problem posed in Abeille and Lazaric (2018). TSAC does
not require a priori known stabilizing controller and achieves fast
stabilization of the underlying system by effectively exploring the environment
in the early stages. Our result hinges on developing a novel lower bound on the
probability that the TS provides an optimistic sample. By carefully prescribing
an early exploration strategy and a policy update rule, we show that TS
achieves order-optimal regret in adaptive control of multidimensional
stabilizable LQRs. We empirically demonstrate the performance and the
efficiency of TSAC in several adaptive control tasks.
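To make the posterior-sampling step concrete, below is a minimal sketch (not the authors' TSAC implementation) of a single Thompson Sampling iteration for an LQR: draw the dynamics parameters $[A\; B]$ from a Gaussian posterior, solve the certainty-equivalent discrete algebraic Riccati equation for the sampled model, and act with the resulting linear state feedback $u_t = -K x_t$. The dimensions, cost matrices, posterior mean, and simplified covariance factor are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative LQR instance (assumed dimensions and cost matrices).
n, m = 2, 1            # state and input dimensions
Q = np.eye(n)          # state cost
R = np.eye(m)          # input cost

def sample_model(mean, cov_sqrt, rng):
    """Thompson step: draw Theta = [A B] from a Gaussian posterior over the
    dynamics (here a simplified shared row-covariance factor)."""
    noise = rng.standard_normal(mean.shape)
    return mean + cov_sqrt @ noise          # shape (n, n + m)

def optimal_gain(A, B, Q, R):
    """Certainty-equivalent LQR gain for the sampled model: solve the
    discrete algebraic Riccati equation and return K."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rng = np.random.default_rng(0)
post_mean = np.hstack([0.9 * np.eye(n), np.ones((n, m))])  # assumed posterior mean over [A B]
post_cov_sqrt = 0.1 * np.eye(n)                            # assumed posterior covariance factor

Theta = sample_model(post_mean, post_cov_sqrt, rng)
A_s, B_s = Theta[:, :n], Theta[:, n:]
K = optimal_gain(A_s, B_s, Q, R)
print("sampled feedback gain:", K)
# At run time the control would be u_t = -K x_t.
```

In the full algorithm the posterior over $[A\; B]$ is refreshed from the observed state-input data according to the paper's policy update rule, with an additional early exploration phase when no stabilizing controller is known a priori.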
Related papers
- Sub-linear Regret in Adaptive Model Predictive Control [56.705978425244496]
We present STT-MPC (Self-Tuning Tube-based Model Predictive Control), an online algorithm that combines the certainty-equivalence principle with polytopic tubes.
We analyze the regret of the algorithm when compared to an algorithm initially aware of the system dynamics.
arXiv Detail & Related papers (2023-10-07T15:07:10Z)
- Finite Time Regret Bounds for Minimum Variance Control of Autoregressive Systems with Exogenous Inputs [10.304902889192071]
A key challenge experienced by many adaptive controllers is their poor empirical performance in the initial stages of learning.
We present a modified version of the Certainty Equivalence (CE) adaptive controller, which utilizes probing inputs for exploration.
We show that it has a $C \log T$ bound on the regret after $T$ time-steps for bounded noise, and $C \log^2 T$ in the case of sub-Gaussian noise.
arXiv Detail & Related papers (2023-05-26T14:29:33Z)
- Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems [79.67879934935661]
We study the problem of system identification and adaptive control of unknown ARX systems.
We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection.
arXiv Detail & Related papers (2021-08-26T18:00:00Z)
- Regret Analysis of Learning-Based MPC with Partially-Unknown Cost Function [5.601217969637838]
The exploration/exploitation trade-off is an inherent challenge in data-driven and adaptive control.
We propose the use of a finite-horizon oracle controller with perfect knowledge of all system parameters as a reference for optimal control actions.
We develop learning-based policies that we prove achieve low regret with respect to this oracle finite-horizon controller.
arXiv Detail & Related papers (2021-08-04T22:43:51Z)
- Regret-optimal Estimation and Control [52.28457815067461]
We show that the regret-optimal estimator and regret-optimal controller can be derived in state-space form.
We propose regret-optimal analogs of Model Predictive Control (MPC) and the Extended Kalman Filter (EKF) for systems with nonlinear dynamics.
arXiv Detail & Related papers (2021-06-22T23:14:21Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under the quadratic cost model, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs) that result in a controller stabilizing all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm -- eXploration -- that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
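For contrast with the posterior sampling sketched above, the optimism-in-the-face-of-uncertainty principle behind LqgOpt picks, from a confidence set around the current estimates, the model whose optimal cost is smallest. The sketch below is a crude stand-in, not the LqgOpt algorithm: it discretizes an assumed confidence ball by random perturbation and scores each candidate by the trace of its Riccati solution; the dimensions, estimates, and radius are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_cost_proxy(A, B, Q, R):
    """Infinite-horizon LQR cost proxy: trace of the Riccati solution
    (proportional to the steady-state cost under unit process noise)."""
    try:
        P = solve_discrete_are(A, B, Q, R)
        return np.trace(P)
    except (np.linalg.LinAlgError, ValueError):
        return np.inf  # candidate not stabilizable / ill-posed; discard

def most_optimistic_model(A_hat, B_hat, radius, Q, R, n_candidates=64, seed=0):
    """Crude OFU step: search a ball of 'radius' around the estimates
    (a stand-in for a confidence ellipsoid) and return the candidate
    (A, B) with the smallest optimal cost."""
    rng = np.random.default_rng(seed)
    best, best_cost = (A_hat, B_hat), lqr_cost_proxy(A_hat, B_hat, Q, R)
    for _ in range(n_candidates):
        dA = rng.standard_normal(A_hat.shape)
        dB = rng.standard_normal(B_hat.shape)
        scale = radius / max(np.linalg.norm(np.hstack([dA, dB])), 1e-12)
        A, B = A_hat + scale * dA, B_hat + scale * dB
        cost = lqr_cost_proxy(A, B, Q, R)
        if cost < best_cost:
            best, best_cost = (A, B), cost
    return best

# Illustrative use with assumed 2-state, 1-input estimates.
n, m = 2, 1
Q, R = np.eye(n), np.eye(m)
A_hat, B_hat = 0.9 * np.eye(n), np.ones((n, m))
A_opt, B_opt = most_optimistic_model(A_hat, B_hat, radius=0.05, Q=Q, R=R)
```

The controller would then be the certainty-equivalent LQR gain for the selected optimistic model, whereas Thompson Sampling replaces the search over the confidence set with a single posterior draw.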