Making Non-Stochastic Control (Almost) as Easy as Stochastic
- URL: http://arxiv.org/abs/2006.05910v2
- Date: Mon, 5 Oct 2020 03:03:16 GMT
- Title: Making Non-Stochastic Control (Almost) as Easy as Stochastic
- Authors: Max Simchowitz
- Abstract summary: We show that the same regret rate is attainable even in the considerably more general non-stochastic control model.
We attain the optimal $\widetilde{\mathcal{O}}(\sqrt{T})$ regret when the dynamics are unknown to the learner.
- Score: 27.736345095024276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent literature has made much progress in understanding \emph{online LQR}:
a modern learning-theoretic take on the classical control problem in which a
learner attempts to optimally control an unknown linear dynamical system with
fully observed state, perturbed by i.i.d. Gaussian noise. It is now understood
that the optimal regret on time horizon $T$ against the optimal control law
scales as $\widetilde{\Theta}(\sqrt{T})$. In this paper, we show that the same
regret rate (against a suitable benchmark) is attainable even in the
considerably more general non-stochastic control model, where the system is
driven by \emph{arbitrary adversarial} noise (Agarwal et al. 2019). In other
words, \emph{stochasticity confers little benefit in online LQR}.
We attain the optimal $\widetilde{\mathcal{O}}(\sqrt{T})$ regret when the
dynamics are unknown to the learner, and $\mathrm{poly}(\log T)$ regret when
known, provided that the cost functions are strongly convex (as in LQR). Our
algorithm is based on a novel variant of online Newton step (Hazan et al.
2007), which adapts to the geometry induced by possibly adversarial
disturbances, and our analysis hinges on generic "policy regret" bounds for
certain structured losses in the OCO-with-memory framework (Anava et al. 2015).
Moreover, our results accommodate the full generality of the non-stochastic
control setting: adversarially chosen (possibly non-quadratic) costs, partial
state observation, and fully adversarial process and observation noise.
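The algorithmic core named in the abstract is a variant of online Newton step. As a point of reference only, here is a minimal sketch of the classical online Newton step update from Hazan et al. (2007) on toy quadratic losses; it is not the paper's geometry-adaptive variant, it omits the generalized (preconditioner-norm) projection onto the feasible set, and the step-size, regularization, and loss sequence are illustrative assumptions.

```python
import numpy as np


def online_newton_step(grad_fns, x0, gamma=0.5, eps=1.0):
    """Classical Online Newton Step (Hazan et al., 2007), without projections.

    grad_fns : list of callables; grad_fns[t](x) returns the gradient of the
               round-t loss at x (losses assumed exp-concave or strongly
               convex, as with LQR-style quadratic costs).
    gamma    : step-size parameter; the analysis ties it to the exp-concavity
               constant and gradient/diameter bounds (value here is arbitrary).
    eps      : initial regularization of the preconditioner A_t.

    Simplification: the generalized projection onto the feasible set in the
    A_t-norm is omitted, i.e. the iterates are assumed to remain feasible.
    """
    x = np.array(x0, dtype=float)
    A = eps * np.eye(x.size)                  # A_t = eps*I + sum_s g_s g_s^T
    iterates = [x.copy()]
    for grad in grad_fns:
        g = grad(x)
        A += np.outer(g, g)                   # rank-one preconditioner update
        x = x - (1.0 / gamma) * np.linalg.solve(A, g)   # Newton-style step
        iterates.append(x.copy())
    return iterates


# Toy usage: quadratic losses f_t(x) = ||x - b_t||^2 with drifting targets b_t.
rng = np.random.default_rng(0)
targets = [rng.normal(size=3) for _ in range(200)]
grads = [lambda x, b=b: 2.0 * (x - b) for b in targets]
print("final iterate:", online_newton_step(grads, x0=np.zeros(3))[-1])
```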
Related papers
- The Best of Both Worlds: Reinforcement Learning with Logarithmic Regret and Policy Switches [84.54669549718075]
We study the problem of regret minimization for episodic Reinforcement Learning (RL).
We focus on learning with general function classes and general model classes.
We show that a logarithmic regret bound is realizable by algorithms with $O(\log T)$ switching cost.
arXiv Detail & Related papers (2022-03-03T02:55:55Z)
- Dynamic Regret for Strongly Adaptive Methods and Optimality of Online KRR [13.165557713537389]
We show that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret.
We derive a new lower bound on a certain penalized regret, which establishes the near minimax optimality of online Kernel Ridge Regression (KRR).
arXiv Detail & Related papers (2021-11-22T21:52:47Z)
- Near-Optimal No-Regret Learning for Correlated Equilibria in Multi-Player General-Sum Games [104.74734408204749]
We show that if all agents in a multi-player general-sum normal-form game employ Optimistic Multiplicative Weights Update (OMWU), the external regret of every player is $O(\textrm{polylog}(T))$ after $T$ repetitions of the game.
We extend this result from external regret to internal regret and swap regret, thereby establishing uncoupled learning dynamics that converge to an approximate correlated equilibrium.
arXiv Detail & Related papers (2021-11-11T01:19:53Z)
- Dynamic Regret Minimization for Control of Non-stationary Linear Dynamical Systems [18.783925692307054]
We present an algorithm that achieves the optimal dynamic regret of $\tilde{\mathcal{O}}(\sqrt{ST})$, where $S$ is the number of switches.
The crux of our algorithm is an adaptive non-stationarity detection strategy, which builds on an approach recently developed for contextual Multi-armed Bandit problems.
arXiv Detail & Related papers (2021-11-06T01:30:51Z)
- Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems [79.67879934935661]
We study the problem of system identification and adaptive control of unknown ARX systems.
We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection.
arXiv Detail & Related papers (2021-08-26T18:00:00Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Naive Exploration is Optimal for Online LQR [49.681825576239355]
We show that the optimal regret scales as $\widetilde{\Theta}(\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T})$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state.
Our lower bounds rule out the possibility of a $\mathrm{poly}(\log T)$-regret algorithm, which had been conjectured in prior work.
arXiv Detail & Related papers (2020-01-27T03:44:54Z)
- Improper Learning for Non-Stochastic Control [78.65807250350755]
We consider the problem of controlling a possibly unknown linear dynamical system with adversarial perturbations, adversarially chosen convex loss functions, and partially observed states.
Applying online descent to a convex controller parametrization yields a new controller which attains sublinear regret vs. a large class of closed-loop policies.
Our bounds are the first in the non-stochastic control setting that compete with \emph{all} stabilizing linear dynamical controllers.
arXiv Detail & Related papers (2020-01-25T02:12:48Z)
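To make the non-stochastic control setting above concrete, the following is a minimal simulation sketch: a linear system driven by a bounded, non-stochastic disturbance sequence and steered by a disturbance-action controller (a stabilizing gain plus a learned linear map of recent disturbances, in the spirit of Agarwal et al. 2019), updated here by a crude online gradient step on the instantaneous control cost. The dynamics, gains, horizon, step size, and the simplified gradient are all illustrative assumptions; the actual algorithms in these papers handle unknown dynamics, differentiate a counterfactual loss over a history window, and, under strongly convex costs, use a Newton-style update.

```python
import numpy as np

d_x, d_u, m, T = 2, 1, 5, 500           # state dim, input dim, DAC history, horizon

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # dynamics assumed known for this sketch
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 1.5]])              # a stabilizing state-feedback gain for this (A, B)
Q, R = np.eye(d_x), 0.1 * np.eye(d_u)   # quadratic state / control costs

M = np.zeros((m, d_u, d_x))             # DAC parameters: u_t = -K x_t + sum_i M[i] w_{t-i}
eta = 0.01                              # online gradient step size (illustrative)

x = np.zeros(d_x)
w_hist = [np.zeros(d_x) for _ in range(m)]
total_cost = 0.0

for t in range(T):
    # Bounded, non-stochastic disturbance (sinusoid plus periodic sign flips).
    w = 0.1 * np.array([np.sin(0.05 * t), (-1.0) ** (t // 7)])

    # Disturbance-action control: stabilizing feedback plus learned response to past w's.
    u = -K @ x + sum(M[i] @ w_hist[-(i + 1)] for i in range(m))
    total_cost += float(x @ Q @ x + u @ R @ u)

    # Crude gradient step on the instantaneous control cost u^T R u in M
    # (the actual methods differentiate a counterfactual loss over a window).
    grad_u = 2.0 * (R @ u)
    for i in range(m):
        M[i] -= eta * np.outer(grad_u, w_hist[-(i + 1)])

    x = A @ x + B @ u + w               # adversarially perturbed dynamics
    w_hist.append(w)                    # disturbance becomes observable after the step

print("average per-round cost:", total_cost / T)
```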