Optimal Algorithms for Online Convex Optimization with Adversarial Constraints
- URL: http://arxiv.org/abs/2310.18955v2
- Date: Fri, 24 May 2024 16:07:26 GMT
- Title: Optimal Algorithms for Online Convex Optimization with Adversarial Constraints
- Authors: Abhishek Sinha, Rahul Vaze
- Abstract summary: In COCO, a convex cost function and a convex constraint function are revealed to the learner after the action for that round is chosen.
We show that an online policy can simultaneously achieve $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ CCV without any restrictive assumptions.
- Score: 16.99491218081617
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A well-studied generalization of the standard online convex optimization (OCO) is constrained online convex optimization (COCO). In COCO, on every round, a convex cost function and a convex constraint function are revealed to the learner after the action for that round is chosen. The objective is to design an online policy that simultaneously achieves a small regret while ensuring a small cumulative constraint violation (CCV) against an adaptive adversary interacting over a horizon of length $T$. A long-standing open question in COCO is whether an online policy can simultaneously achieve $O(\sqrt{T})$ regret and $O(\sqrt{T})$ CCV without any restrictive assumptions. For the first time, we answer this in the affirmative and show that an online policy can simultaneously achieve $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ CCV. Furthermore, in the case of strongly convex cost and convex constraint functions, the regret guarantee can be improved to $O(\log T)$ while keeping the CCV bound the same as above. We establish these results by effectively combining the adaptive regret bound of the AdaGrad algorithm with Lyapunov optimization - a classic tool from control theory. Surprisingly, the analysis is short and elegant.
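The abstract's recipe, adaptive (AdaGrad-style) gradient steps on a Lyapunov drift-plus-penalty surrogate, can be illustrated on a toy 1-D problem. This is a generic sketch of the drift-plus-penalty technique, not the authors' exact algorithm; the quadratic cost, the fixed constraint $g_t(x) = x - 0.5 \le 0$, and the queue-weighted surrogate are assumptions made purely for illustration.

```python
import math
import random

def run_coco(T=2000, lo=-1.0, hi=1.0, seed=0):
    """Drift-plus-penalty sketch for constrained OCO on a 1-D toy problem.

    A virtual queue Q accumulates constraint violation (the Lyapunov state);
    each round we take an adaptive gradient step on the surrogate
    f_t(x) + Q * g_t(x), then project back onto [lo, hi].
    """
    rng = random.Random(seed)
    x, Q = 0.0, 0.0          # decision variable and virtual queue
    G2 = 1e-8                # running sum of squared gradients (AdaGrad-style)
    total_cost, ccv = 0.0, 0.0
    for _ in range(T):
        c = rng.uniform(-1.0, 1.0)        # adversary reveals f_t(x) = (x - c)^2
        # cost and constraint are revealed only AFTER x is played
        total_cost += (x - c) ** 2
        ccv += max(0.0, x - 0.5)          # constraint g_t(x) = x - 0.5 <= 0
        Q = max(0.0, Q + (x - 0.5))       # Lyapunov virtual-queue update
        grad = 2.0 * (x - c) + Q          # gradient of surrogate f_t + Q * g_t
        G2 += grad * grad
        x = min(hi, max(lo, x - grad / math.sqrt(G2)))  # adaptive step + projection
    return total_cost, ccv
```

The queue penalizes actions in proportion to how much violation has already accumulated, which is what keeps the CCV sublinear while the adaptive step size controls regret.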
Related papers
- Tight Bounds for Online Convex Optimization with Adversarial Constraints [16.99491218081617]
In COCO, a convex cost function and a convex constraint function are revealed to the learner after the action for that round is chosen.
We show that an online policy can simultaneously achieve $O(\sqrt{T})$ regret and $\tilde{O}(\sqrt{T})$ CCV without any restrictive assumptions.
arXiv Detail & Related papers (2024-05-15T12:37:03Z) - Optimistic Safety for Online Convex Optimization with Unknown Linear Constraints [31.526232903811533]
We introduce an algorithm that we term Optimistically Safe OCO and show that it enjoys $\tilde{\mathcal{O}}(\sqrt{T})$ regret and no constraint violation.
In the case of static linear constraints, this improves on the previous best known $\tilde{\mathcal{O}}(T^{2/3})$ regret with only slightly stronger assumptions.
In the case of time-varying constraints, our work supplements existing results that show $\mathcal{O}(\sqrt{T})$ regret and …
arXiv Detail & Related papers (2024-03-09T04:01:39Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Exploiting the Curvature of Feasible Sets for Faster Projection-Free Online Learning [8.461907111368628]
We develop new efficient projection-free algorithms for Online Convex Optimization (OCO).
We develop an OCO algorithm that makes two calls to an LO Oracle per round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret.
We also present an algorithm for general convex sets that makes an expected $\widetilde{O}(d)$ calls to an LO Oracle per round.
arXiv Detail & Related papers (2022-05-23T17:13:46Z) - Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite-horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
arXiv Detail & Related papers (2022-04-11T15:08:09Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Revisiting Smoothed Online Learning [70.09792747315323]
We investigate the problem of smoothed online learning, in which the online learner suffers both a hitting cost and a switching cost.
To bound the competitive ratio, we assume the hitting cost is known to the learner in each round, and investigate the greedy algorithm which simply minimizes the weighted sum of the hitting cost and the switching cost.
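The greedy rule described above, play the minimizer of the weighted sum of the hitting cost and the switching cost, has a closed form in a simple 1-D instance. The quadratic hitting cost $(x - c_t)^2$ and absolute switching cost $\lambda |x - x_{t-1}|$ below are assumptions chosen for illustration; the paper's setting is more general.

```python
import math

def greedy_step(x_prev, c, lam=1.0):
    """One round of the greedy rule for smoothed online learning:
        x_t = argmin_x (x - c)^2 + lam * |x - x_prev|
    For this hitting/switching pair the minimizer is a soft-threshold
    move toward c: stay put if c is within lam/2 of x_prev, otherwise
    move toward c but stop lam/2 short of it.
    """
    d = c - x_prev
    return x_prev + math.copysign(max(0.0, abs(d) - lam / 2.0), d)
```

The switching cost makes small chases unprofitable (the learner stays put for nearby targets), which is exactly the lazy behavior the competitive-ratio analysis exploits.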
arXiv Detail & Related papers (2021-02-13T14:15:55Z) - Lazy OCO: Online Convex Optimization on a Switching Budget [34.936641201844054]
We study a variant of online convex optimization where the player is permitted to switch decisions at most $S$ times in expectation throughout $T$ rounds.
Similar problems have been addressed in prior work for the discrete decision set setting, and more recently in the continuous setting but only with an adaptive adversary.
arXiv Detail & Related papers (2021-02-07T14:47:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.