Optimal Dynamic Regret in LQR Control
- URL: http://arxiv.org/abs/2206.09257v1
- Date: Sat, 18 Jun 2022 18:00:21 GMT
- Title: Optimal Dynamic Regret in LQR Control
- Authors: Dheeraj Baby and Yu-Xiang Wang
- Abstract summary: We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control.
We provide an online algorithm that achieves an optimal dynamic (policy) regret of $\tilde{O}(\max\{n^{1/3} \mathcal{TV}(M_{1:n})^{2/3}, 1\})$.
- Score: 23.91519151164528
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the problem of nonstochastic control with a sequence of quadratic
losses, i.e., LQR control. We provide an efficient online algorithm that
achieves an optimal dynamic (policy) regret of $\tilde{O}(\text{max}\{n^{1/3}
\mathcal{TV}(M_{1:n})^{2/3}, 1\})$, where $\mathcal{TV}(M_{1:n})$ is the total
variation of any oracle sequence of Disturbance Action policies parameterized
by $M_1,...,M_n$ -- chosen in hindsight to cater to unknown nonstationarity.
This rate improves upon the best known rate of $\tilde{O}(\sqrt{n (\mathcal{TV}(M_{1:n})+1)})$ for
general convex losses, and we prove that it is information-theoretically
optimal for LQR. Main technical components include
the reduction of LQR to online linear regression with delayed feedback due to
Foster and Simchowitz (2020), as well as a new proper learning algorithm with
an optimal $\tilde{O}(n^{1/3})$ dynamic regret on a family of "minibatched"
quadratic losses, which could be of independent interest.
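To make the comparator class concrete, the following NumPy sketch (an illustration with our own naming, not code from the paper) implements a Disturbance Action policy, the standard nonstochastic-control parameterization in which the control is a learned linear function of recent disturbances, together with the total-variation path length $\mathcal{TV}(M_{1:n})$ that the bound scales with; the history length $m$ and the Frobenius norm are illustrative choices.

```python
import numpy as np

def dac_control(M, past_w):
    """Disturbance Action Controller: u_t = sum_i M^{[i]} w_{t-i}.

    M:      (m, d_u, d_x) array holding the policy parameters for this round.
    past_w: list of the m most recent disturbances w_{t-1}, ..., w_{t-m},
            each of shape (d_x,).
    """
    return sum(M[i] @ past_w[i] for i in range(len(M)))

def total_variation(M_seq):
    """Path length TV(M_{1:n}) = sum_t ||M_t - M_{t-1}|| of a policy sequence
    (Frobenius norm here; the paper's choice of norm may differ)."""
    return sum(np.linalg.norm(M_seq[t] - M_seq[t - 1])
               for t in range(1, len(M_seq)))
```

Note that when the oracle sequence is constant, $\mathcal{TV}(M_{1:n}) = 0$ and the bound collapses to $\tilde{O}(1)$; the $n^{1/3}\mathcal{TV}(M_{1:n})^{2/3}$ term is the price of tracking a drifting comparator.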
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Efficient Rate Optimal Regret for Adversarial Contextual MDPs Using Online Function Approximation [47.18926328995424]
We present the OMG-CMDP! algorithm for regret minimization in adversarial Contextual MDPs.
Our algorithm is efficient (assuming efficient online regression oracles), simple, and robust to approximation errors.
arXiv Detail & Related papers (2023-03-02T18:27:00Z)
- Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z)
- Dynamic Regret Minimization for Control of Non-stationary Linear Dynamical Systems [18.783925692307054]
We present an algorithm that achieves the optimal dynamic regret of $\tilde{\mathcal{O}}(\sqrt{ST})$, where $S$ is the number of switches.
The crux of our algorithm is an adaptive non-stationarity detection strategy, which builds on an approach recently developed for contextual Multi-armed Bandit problems.
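The abstract does not spell out the detection test, so here is only a generic illustration of the restart-on-detection pattern (our sketch, not the paper's statistic): monitor recent losses against a long-run baseline and restart the base controller when they diverge.

```python
import numpy as np

def change_detected(losses, window=50, threshold=0.5):
    """Generic non-stationarity check: flag a change when the average loss
    over the last `window` rounds exceeds the long-run average by more than
    `threshold`.  Both hyperparameters are illustrative, not the paper's."""
    if len(losses) < 2 * window:
        return False
    return np.mean(losses[-window:]) - np.mean(losses[:-window]) > threshold

# Toy usage: the loss mean jumps from 1.0 to 2.0 at t = 300.
rng = np.random.default_rng(0)
losses = [rng.normal(1.0 if t < 300 else 2.0) for t in range(400)]
print(change_detected(losses))  # True once the recent window reflects the shift
```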
arXiv Detail & Related papers (2021-11-06T01:30:51Z)
- Optimal Dynamic Regret in Exp-Concave Online Learning [28.62891856368132]
We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses.
We show that whenever improper learning is allowed, a Strongly Adaptive online learner achieves a dynamic regret of $\tilde{O}(d^{3.5} n^{1/3} C_n^{2/3} \vee d \log n)$, where $C_n$ is the total variation (a.k.a. path length) of an arbitrary sequence of comparators that may not be known to the learner ahead of time.
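Strongly Adaptive learners are typically assembled by running base learners on geometric covering intervals and aggregating them with an expert algorithm; the sketch below builds the standard interval set from that literature (the paper's exact construction and aggregation scheme may differ).

```python
def geometric_covering(horizon):
    """Geometric covering intervals: for each scale k, tile [0, horizon)
    with disjoint intervals of length 2^k.  Any comparator window is covered
    by O(log n) of them, so a meta-learner running one base learner per
    interval (aggregated, e.g., by multiplicative weights) competes on every
    subwindow simultaneously."""
    intervals = []
    length = 1
    while length <= horizon:
        intervals.extend((start, min(start + length, horizon) - 1)
                         for start in range(0, horizon, length))
        length *= 2
    return intervals

print(geometric_covering(8))
# [(0,0), ..., (7,7), (0,1), (2,3), (4,5), (6,7), (0,3), (4,7), (0,7)]
```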
arXiv Detail & Related papers (2021-04-23T21:36:51Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm, sbcalg, that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
- Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes [91.38793800392108]
We study reinforcement learning with linear function approximation where the underlying transition probability kernel of the Markov decision process (MDP) is a linear mixture model.
We propose a new, computationally efficient algorithm with linear function approximation named $\text{UCRL-VTR}^{+}$ for the aforementioned linear mixture MDPs.
To the best of our knowledge, these are the first computationally efficient, nearly minimax optimal algorithms for RL with linear function approximation.
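UCRL-VTR stands for value-targeted regression: the mixture parameter is estimated by regressing realized next-state values onto value-weighted features. A minimal ridge-regression sketch of that estimation step (illustrative names and shapes, not the paper's full algorithm with its confidence sets) follows.

```python
import numpy as np

def vtr_estimate(Phi, y, lam=1.0):
    """Value-targeted ridge regression.  Row Phi[t] plays the role of the
    value-weighted feature phi_V(s_t, a_t), y[t] is the realized value
    V(s_{t+1}), and the returned theta parameterizes the linear mixture
    P = sum_i theta_i P_i."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)
```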
arXiv Detail & Related papers (2020-12-15T18:56:46Z)
- Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation [107.06364966905821]
We study the exploration-exploitation dilemma in the linear quadratic regulator (LQR) setting.
Inspired by the extended value iteration algorithm used in optimistic algorithms for finite MDPs, we propose to relax the optimistic optimization of OFU-LQ.
We show that an $\epsilon$-optimistic controller can be computed efficiently by solving at most $O\big(\log(1/\epsilon)\big)$ Riccati equations.
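For a sense of the unit of computation being counted, each Riccati solve yields one candidate LQR controller; a minimal SciPy sketch of a single such solve (standard LQR machinery with illustrative numbers, not the paper's relaxation) is shown below.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Solve one discrete-time algebraic Riccati equation and return the
    optimal state-feedback gain K, for the control law u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Toy 2-state, 1-input system.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))
print(K)  # shape (1, 2); ~log(1/eps) such solves give an eps-optimistic controller
```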
arXiv Detail & Related papers (2020-07-13T16:30:47Z)
- Naive Exploration is Optimal for Online LQR [49.681825576239355]
We show that the optimal regret scales as $\widetilde{\Theta}(\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T})$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state.
Our lower bounds rule out the possibility of a $\mathrm{poly}(\log T)$-regret algorithm, which had been conjectured in prior work.
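"Naive exploration" here refers to certainty-equivalent control with injected random noise; a one-function sketch of that input rule (illustrative noise model, not the paper's tuned schedule) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_input(K_hat, x, sigma):
    """Certainty equivalence plus isotropic Gaussian exploration: play the LQR
    gain computed for the *estimated* system and add noise so that
    least-squares identification of (A, B) keeps making progress."""
    return -K_hat @ x + sigma * rng.standard_normal(K_hat.shape[0])
```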
arXiv Detail & Related papers (2020-01-27T03:44:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.