Dynamic Regret of Convex and Smooth Functions
- URL: http://arxiv.org/abs/2007.03479v2
- Date: Sat, 28 Nov 2020 08:52:02 GMT
- Title: Dynamic Regret of Convex and Smooth Functions
- Authors: Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou
- Abstract summary: We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
- Score: 93.71361250701075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate online convex optimization in non-stationary environments and
choose the dynamic regret as the performance measure, defined as the difference
between cumulative loss incurred by the online algorithm and that of any
feasible comparator sequence. Let $T$ be the time horizon and $P_T$ the
path-length, which essentially reflects the non-stationarity of the
environment; the state-of-the-art dynamic regret bound is
$\mathcal{O}(\sqrt{T(1+P_T)})$. Although
this bound is proved to be minimax optimal for convex functions, in this paper,
we demonstrate that it is possible to further enhance the dynamic regret by
exploiting the smoothness condition. Specifically, we propose novel online
algorithms that are capable of leveraging smoothness and replace the dependence
on $T$ in the dynamic regret by problem-dependent quantities: the variation in
gradients of loss functions, the cumulative loss of the comparator sequence,
and the minimum of the previous two terms. These quantities are at most
$\mathcal{O}(T)$ but can be much smaller in benign environments. Therefore,
our results are adaptive to the intrinsic difficulty of the problem, since the
bounds are tighter than existing results for easy problems and meanwhile
guarantee the same rate in the worst case.
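For concreteness, the quantities above admit the following standard formalization; the exact definitions of the gradient variation $V_T$ and the comparator loss $F_T$ below follow common conventions in this literature and should be checked against the full paper:

```latex
% Dynamic regret against an arbitrary feasible comparator sequence u_1, ..., u_T,
% and the path-length P_T measuring how much the comparators move.
\[
  \text{D-Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
  \qquad
  P_T \;=\; \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert_2 .
\]
% Problem-dependent quantities named in the abstract (conventions assumed):
\[
  V_T \;=\; \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \lVert \nabla f_t(x) - \nabla f_{t-1}(x) \rVert_2^{2}
  \quad \text{(gradient variation)},
  \qquad
  F_T \;=\; \sum_{t=1}^{T} f_t(u_t)
  \quad \text{(comparator loss)}.
\]
```

Both $V_T$ and $F_T$ are at most $\mathcal{O}(T)$ under standard boundedness assumptions, which is what makes bounds depending on them adaptive: they recover the worst-case rate yet shrink on easy instances.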
Related papers
- An Equivalence Between Static and Dynamic Regret Minimization [10.812831455376218]
We show that for linear losses, dynamic regret minimization is equivalent to static regret minimization in an extended decision space.
We provide an algorithm guaranteeing a dynamic regret bound of the form $R_T(u_1,\dots,u_T) \le \tilde{\mathcal{O}}(\cdot)$.
arXiv Detail & Related papers (2024-06-03T17:54:58Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely Variance Reduced Langevin Dynamics and Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Dynamic Regret of Online Mirror Descent for Relatively Smooth Convex
Cost Functions [30.412826613009518]
We show that it is possible to bound the dynamic regret, even when neither Lipschitz continuity nor uniform smoothness is present.
We then show that with an additional condition of relatively strong convexity, the dynamic regret can be bounded by the path length and gradient variation.
arXiv Detail & Related papers (2022-02-25T17:35:07Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Dynamic Regret for Strongly Adaptive Methods and Optimality of Online
KRR [13.165557713537389]
We show that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret.
We derive a new lower bound on a certain penalized regret which establishes the near minimax optimality of online Kernel Ridge Regression (KRR).
arXiv Detail & Related papers (2021-11-22T21:52:47Z) - A closer look at temporal variability in dynamic online learning [19.468067110814808]
This work focuses on the setting of dynamic regret in the context of online learning with full information.
By assuming that the sequence of loss functions does not vary much with time, we show that it is possible to incur improved regret bounds compared to existing results.
arXiv Detail & Related papers (2021-02-15T16:50:16Z)
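To make the shared setup of these papers concrete, below is a minimal sketch of measuring the dynamic regret of online gradient descent on a drifting quadratic loss. Everything here (the quadratic losses, the comparator choice, the fixed step size) is an illustrative assumption, not code from any of the papers above:

```python
import numpy as np

# Minimal sketch: online gradient descent (OGD) on time-varying quadratic
# losses f_t(x) = 0.5 * ||x - c_t||^2, with dynamic regret measured against
# the per-round minimizers u_t = c_t. All modeling choices are illustrative.

rng = np.random.default_rng(0)
T, d = 1000, 5
eta = 0.1                        # fixed step size (assumption)

# Slowly drifting loss centers -> small path-length P_T (a benign environment).
centers = np.cumsum(0.01 * rng.standard_normal((T, d)), axis=0)

x = np.zeros(d)
alg_loss, cmp_loss, path_length = 0.0, 0.0, 0.0

for t in range(T):
    c = centers[t]
    alg_loss += 0.5 * np.sum((x - c) ** 2)   # learner's loss f_t(x_t)
    cmp_loss += 0.0                          # comparator u_t = c_t has zero loss
    if t > 0:
        path_length += np.linalg.norm(centers[t] - centers[t - 1])
    grad = x - c                             # gradient of f_t at x_t
    x = x - eta * grad                       # OGD update (unconstrained sketch)

dynamic_regret = alg_loss - cmp_loss
print(f"P_T = {path_length:.2f}, dynamic regret = {dynamic_regret:.2f}")
```

Comparing the measured regret against $\mathcal{O}(\sqrt{T(1+P_T)})$ for different drift rates gives a feel for why the path-length, rather than $T$ alone, governs the achievable rate in non-stationary environments.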