Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
- URL: http://arxiv.org/abs/2405.20824v1
- Date: Fri, 31 May 2024 14:16:52 GMT
- Title: Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
- Authors: Stephen Pasteris, Chris Hicks, Vasilios Mavroudis, Mark Herbster
- Abstract summary: A switching regret is defined relative to any segmentation of the trial sequence, and is equal to the sum of the static regrets of each segment.
Our algorithm for doing so is very efficient: having a space and per-trial time complexity that is logarithmic in the time-horizon.
- Score: 8.850922234275636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the classic problem of online convex optimisation. Whereas the notion of static regret is relevant for stationary problems, the notion of switching regret is more appropriate for non-stationary problems. A switching regret is defined relative to any segmentation of the trial sequence, and is equal to the sum of the static regrets of each segment. In this paper we show that, perhaps surprisingly, we can achieve the asymptotically optimal switching regret on every possible segmentation simultaneously. Our algorithm for doing so is very efficient: having a space and per-trial time complexity that is logarithmic in the time-horizon. Our algorithm also obtains novel bounds on its dynamic regret: being adaptive to variations in the rate of change of the comparator sequence.
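The abstract's definition can be made concrete: a switching regret relative to a segmentation is the sum, over segments, of the static regret on that segment. Below is a minimal sketch of that definition for a one-dimensional toy problem; the loss functions, the "always play 0" learner, and the grid-based best-fixed-point search are all illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's method): switching regret of a learner
# with respect to a given segmentation of the trial sequence.

def static_regret(losses, actions, segment):
    """Static regret on [lo, hi): the learner's cumulative loss minus the loss
    of the best single fixed comparator, approximated over a finite grid."""
    lo, hi = segment
    grid = [i / 100 - 1.0 for i in range(201)]   # candidate fixed points in [-1, 1]
    alg_loss = sum(losses[t](actions[t]) for t in range(lo, hi))
    best_fixed = min(sum(losses[t](u) for t in range(lo, hi)) for u in grid)
    return alg_loss - best_fixed

def switching_regret(losses, actions, segmentation):
    """Switching regret: the sum of the static regrets of each segment."""
    return sum(static_regret(losses, actions, seg) for seg in segmentation)

# Toy non-stationary problem: quadratic losses whose minimiser jumps at t = 5.
T = 10
targets = [0.5] * 5 + [-0.5] * 5
losses = [lambda x, c=c: (x - c) ** 2 for c in targets]
actions = [0.0] * T                               # a trivial "always play 0" learner

print(switching_regret(losses, actions, [(0, 5), (5, 10)]))   # 2.5
print(switching_regret(losses, actions, [(0, 10)]))           # 0.0
```

The contrast between the two outputs shows why switching regret is the sharper measure for non-stationary problems: the static learner has zero regret against the single-segment comparison (playing 0 is the best fixed point overall), yet incurs regret 2.5 against the segmentation aligned with the change point.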
Related papers
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of sparse variables.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Efficient Adaptive Regret Minimization [35.121567896321885]
In online convex optimization the player aims to minimize her regret against a fixed comparator over the entire repeated game.
Existing adaptive regret algorithms suffer from a computational penalty - typically on the order of a multiplicative factor that grows logarithmically in the number of game iterations.
In this paper we show how to reduce this computational penalty to be doubly logarithmic in the number of game iterations, and with minimal degradation to the optimal attainable adaptive regret bounds.
arXiv Detail & Related papers (2022-07-01T19:43:11Z) - No-Regret Learning in Games with Noisy Feedback: Faster Rates and Adaptivity via Learning Rate Separation [76.61911795703062]
We study the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents.
In this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments.
We propose a fully adaptive method that smoothly interpolates between worst- and best-case regret guarantees.
arXiv Detail & Related papers (2022-06-13T10:13:51Z) - Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor [68.17855675511602]
We investigate an online prediction strategy named Discounted-Normal-Predictor (Kapralov and Panigrahy, 2010) for smoothed online convex optimization (SOCO).
We show that the proposed algorithm can minimize the adaptive regret with switching cost in every interval.
arXiv Detail & Related papers (2022-05-02T08:48:22Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Continuation Path with Linear Convergence Rate [18.405645120971496]
Path-following algorithms are frequently used in composite optimization problems where a series of subproblems are solved sequentially.
We present a primal-dual analysis of the path-following algorithm and determine how accurately each subproblem should be solved to guarantee a linear convergence rate on a target problem.
Considering optimization with a sparsity-inducing penalty, we analyze the change of the active sets with respect to the regularization parameter.
arXiv Detail & Related papers (2021-12-09T18:42:13Z) - Near-Linear Time Algorithm with Near-Logarithmic Regret Per Switch for Mixable/Exp-Concave Losses [0.0]
We study the online optimization of mixable loss functions with logarithmic static regret in a dynamic environment.
For the first time in the literature, we show that it is also possible to achieve near-logarithmic regret per switch with sub-polynomial complexity per time.
arXiv Detail & Related papers (2021-09-28T15:06:31Z) - Optimistic and Adaptive Lagrangian Hedging [11.698607733213226]
In online learning, an algorithm plays against an environment with losses possibly picked by an adversary at each round.
We introduce optimism and adaptive stepsizes to Lagrangian hedging, a class of online algorithms that includes regret-matching and Hedge.
arXiv Detail & Related papers (2021-01-23T23:32:40Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z) - Minimizing Dynamic Regret and Adaptive Regret Simultaneously [60.17824125301273]
We propose novel online algorithms that are able to minimize the dynamic regret and adaptive regret simultaneously.
Our theoretical guarantee is even stronger in the sense that one algorithm is able to minimize the dynamic regret over any interval.
arXiv Detail & Related papers (2020-02-06T03:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.