Adaptive Online Non-stochastic Control
- URL: http://arxiv.org/abs/2310.02261v3
- Date: Tue, 23 Apr 2024 07:33:27 GMT
- Title: Adaptive Online Non-stochastic Control
- Authors: Naram Mhaisen, George Iosifidis
- Abstract summary: We tackle the problem of Non-stochastic Control (NSC) with the aim of obtaining algorithms whose policy regret is proportional to the difficulty of the controlled environment.
We tailor the Follow The Regularized Leader (FTRL) framework to dynamical systems by using regularizers that are proportional to the actual witnessed costs.
- Score: 10.25772015681554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the problem of Non-stochastic Control (NSC) with the aim of obtaining algorithms whose policy regret is proportional to the difficulty of the controlled environment. Namely, we tailor the Follow The Regularized Leader (FTRL) framework to dynamical systems by using regularizers that are proportional to the actual witnessed costs. The main challenge arises from using the proposed adaptive regularizers in the presence of a state, or equivalently, a memory, which couples the effect of the online decisions and requires new tools for bounding the regret. Via new analysis techniques for NSC and FTRL integration, we obtain novel disturbance action controllers (DAC) with sub-linear data adaptive policy regret bounds that shrink when the trajectory of costs has small gradients, while staying sub-linear even in the worst case.
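The abstract describes disturbance action controllers (DAC) updated online with regularization that adapts to the witnessed cost gradients. As a rough illustration only (not the paper's actual algorithm or analysis), the toy sketch below runs a scalar linear system x_{t+1} = a*x_t + b*u_t + w_t with a DAC policy u_t = Σ_i M_i w_{t-i}, and uses an AdaGrad-style step size as a stand-in for the cost-proportional FTRL regularizer; the system parameters, cost weights, and clipping bound are all illustrative assumptions.

```python
import numpy as np

def adaptive_dac_control(a=0.9, b=1.0, H=3, T=200, seed=0):
    """Toy sketch of a data-adaptive DAC update on a scalar linear system.

    The step size shrinks with the accumulated squared gradients of the
    witnessed costs (AdaGrad-style), mimicking the "regularizers
    proportional to the actual witnessed costs" idea at a high level.
    """
    rng = np.random.default_rng(seed)
    M = np.zeros(H)          # DAC parameters: u_t = M @ (recent disturbances)
    w_hist = np.zeros(H)     # recent disturbances, newest first
    grad_sq_sum = 1e-8       # accumulator for adaptive step size
    x = 0.0
    costs = []
    for t in range(T):
        u = float(M @ w_hist)                  # DAC action
        w = 0.1 * rng.standard_normal()        # disturbance
        x_next = a * x + b * u + w             # system step
        cost = x_next**2 + 0.1 * u**2          # quadratic stage cost
        costs.append(cost)
        # Gradient of the instantaneous cost w.r.t. M (through u only;
        # a simplification that ignores the memory coupling the paper handles).
        g = (2.0 * x_next * b + 0.2 * u) * w_hist
        grad_sq_sum += float(g @ g)
        eta = 1.0 / np.sqrt(grad_sq_sum)       # adaptive: small gradients -> larger steps
        M = np.clip(M - eta * g, -5.0, 5.0)    # keep the policy class bounded
        w_hist = np.roll(w_hist, 1)
        w_hist[0] = w
        x = x_next
    return np.array(costs)

costs = adaptive_dac_control()
```

In this sketch the cumulative step-size schedule shrinks exactly when the trajectory of cost gradients is small, which is the qualitative behavior the data-adaptive regret bounds capture; the actual paper works with full FTRL updates and a regret analysis that accounts for the state/memory coupling.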
Related papers
- Neural Port-Hamiltonian Models for Nonlinear Distributed Control: An Unconstrained Parametrization Approach [0.0]
Neural Networks (NNs) can be leveraged to parametrize control policies that yield good performance.
NNs' sensitivity to small input changes poses a risk of destabilizing the closed-loop system.
To address these problems, we leverage the framework of port-Hamiltonian systems to design continuous-time distributed control policies.
The effectiveness of the proposed distributed controllers is demonstrated through consensus control of non-holonomic mobile robots.
arXiv Detail & Related papers (2024-11-15T10:44:29Z) - Regret Analysis of Policy Optimization over Submanifolds for Linearly Constrained Online LQG [12.201535821920624]
We study online linear quadratic Gaussian problems with a given linear constraint imposed on the controller.
We propose online optimistic Newton on manifold (OONM) which provides an online controller based on the prediction on the first and second order information of the function sequence.
arXiv Detail & Related papers (2024-03-13T14:06:18Z) - Online Nonstochastic Control with Adversarial and Static Constraints [12.2632894803286]
We propose online nonstochastic control algorithms that achieve both sublinear regret and sublinear adversarial constraint violation.
Our algorithms are adaptive to adversarial constraints and achieve smaller cumulative costs and violations.
arXiv Detail & Related papers (2023-02-05T16:46:12Z) - Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret [61.59646565655169]
We show that several recently proposed online control algorithms achieve the best of both worlds: sublinear regret vs. the best DAC policy selected in hindsight.
We conclude that sublinear regret vs. the optimal competitive policy is attainable when the linear dynamical system is unknown.
arXiv Detail & Related papers (2022-11-21T07:29:08Z) - Regret-optimal Estimation and Control [52.28457815067461]
We show that the regret-optimal estimator and regret-optimal controller can be derived in state-space form.
We propose regret-optimal analogs of Model-Predictive Control (MPC) and the Extended Kalman Filter (EKF) for systems with nonlinear dynamics.
arXiv Detail & Related papers (2021-06-22T23:14:21Z) - Non-stationary Online Learning with Memory and Non-stochastic Control [71.14503310914799]
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions.
In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments.
We propose a novel algorithm for OCO with memory that provably enjoys an optimal dynamic policy regret in terms of time horizon, non-stationarity measure, and memory length.
arXiv Detail & Related papers (2021-02-07T09:45:15Z) - Adaptive Regret for Control of Time-Varying Dynamics [31.319502238224334]
We introduce the metric of adaptive regret to the field of control.
Our main contribution is a novel efficient meta-algorithm.
The main technical innovation is the first adaptive regret bound for the more general framework of online convex optimization with memory.
arXiv Detail & Related papers (2020-07-08T19:40:34Z) - Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems [91.43582419264763]
We study the problem of system identification and adaptive control in partially observable linear dynamical systems.
We present the first model estimation method with finite-time guarantees in both open and closed-loop system identification.
We show that AdaptOn is the first algorithm that achieves $\text{polylog}\left(T\right)$ regret in adaptive control of unknown partially observable linear dynamical systems.
arXiv Detail & Related papers (2020-03-25T06:00:33Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.