Dual Adaptivity: Universal Algorithms for Minimizing the Adaptive Regret of Convex Functions
- URL: http://arxiv.org/abs/2508.00392v1
- Date: Fri, 01 Aug 2025 07:42:38 GMT
- Title: Dual Adaptivity: Universal Algorithms for Minimizing the Adaptive Regret of Convex Functions
- Authors: Lijun Zhang, Wenhao Yang, Guanghui Wang, Wei Jiang, Zhi-Hua Zhou
- Abstract summary: A new performance measure -- adaptive regret -- was proposed in online learning. Several algorithms have been successfully developed to minimize the adaptive regret. Existing algorithms lack universality in the sense that they can only handle one type of convex function.
- Score: 61.393184339405465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To deal with changing environments, a new performance measure -- adaptive regret, defined as the maximum static regret over any interval, was proposed in online learning. Under the setting of online convex optimization, several algorithms have been successfully developed to minimize the adaptive regret. However, existing algorithms lack universality in the sense that they can only handle one type of convex function and need a priori knowledge of parameters, which hinders their application in real-world scenarios. To address this limitation, this paper investigates universal algorithms with dual adaptivity, which automatically adapt to the property of functions (convex, exponentially concave, or strongly convex), as well as the nature of environments (stationary or changing). Specifically, we propose a meta-expert framework for dual adaptive algorithms, where multiple experts are created dynamically and aggregated by a meta-algorithm. The meta-algorithm is required to yield a second-order bound, which can accommodate unknown function types. We further incorporate the technique of sleeping experts to capture the changing environments. For the construction of experts, we introduce two strategies (increasing the number of experts or enhancing the capabilities of experts) to achieve universality. Theoretical analysis shows that our algorithms are able to minimize the adaptive regret for multiple types of convex functions simultaneously, and also allow the type of functions to switch between rounds. Moreover, we extend our meta-expert framework to online composite optimization, and develop a universal algorithm for minimizing the adaptive regret of composite functions.
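To make the meta-expert recipe above concrete, here is a minimal Python sketch, not the paper's actual algorithm: experts are plain online-gradient-descent instances spawned on geometric covering intervals, put to sleep outside their intervals, and mixed by a simple multiplicative-weights meta-algorithm. The `OGDExpert` class, the interval schedule, and the fixed learning rate `eta` are illustrative assumptions; in particular, the paper requires a meta-algorithm with a second-order bound (e.g., Adapt-ML-Prod), for which plain Hedge here is only a stand-in.

```python
import numpy as np

class OGDExpert:
    """One expert: online gradient descent on the interval [lo, hi]."""
    def __init__(self, birth, life, lo=-1.0, hi=1.0):
        self.birth, self.life = birth, life
        self.lo, self.hi = lo, hi
        self.x = 0.0

    def awake(self, t):
        return self.birth <= t < self.birth + self.life

    def update(self, grad, t):
        step = 1.0 / np.sqrt(t - self.birth + 1)   # standard OGD step size
        self.x = float(np.clip(self.x - step * grad, self.lo, self.hi))

def lifetime(t):
    """An expert born at round t lives for twice the largest power of
    two dividing t (a geometric covering of intervals)."""
    k = 1
    while t % (2 ** k) == 0:
        k += 1
    return 2 ** k

def run(grad_fn, T, eta=0.5):
    experts, logw, plays = [], [], []
    for t in range(1, T + 1):
        experts.append(OGDExpert(t, lifetime(t)))   # spawn a new expert
        logw.append(0.0)
        active = [i for i, e in enumerate(experts) if e.awake(t)]
        lw = np.array([logw[i] for i in active])
        w = np.exp(lw - lw.max())
        w /= w.sum()
        x = float(sum(wi * experts[i].x for wi, i in zip(w, active)))
        g = grad_fn(x, t)            # (sub)gradient of f_t at the play
        for i in active:
            # sleeping experts: only awake experts pay the linearized loss
            logw[i] -= eta * g * (experts[i].x - x)
            experts[i].update(grad_fn(experts[i].x, t), t)
        plays.append(x)
    return plays
```

The covering intervals are what deliver adaptive regret: every interval is (nearly) covered by the lifetime of some expert, so bounding the meta-algorithm's regret against that awake expert bounds the static regret on the interval. For instance, `run(lambda x, t: 2 * (x - np.sin(t / 100.0)), 1000)` tracks a drifting minimizer.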
Related papers
- Universal Online Convex Optimization with $1$ Projection per Round [31.16522982351235]
We develop universal algorithms that simultaneously attain minimax rates for multiple types of convex functions.
We employ a surrogate loss defined over simpler domains to develop universal OCO algorithms that require only $1$ projection per round.
Our analysis sheds new light on the surrogate loss, facilitating rigorous examination of the discrepancy between the regret of the original loss and that of the surrogate loss.
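As a rough illustration of the one-projection structure (a sketch under assumptions, not the paper's surrogate-loss construction): the update runs in a simple enclosing ball whose projection is cheap, and the expensive projection onto the true domain `K` is performed once per round to produce the played point. Here `proj_K`, `grad_fn`, and the radius `R` are placeholders.

```python
import numpy as np

def ogd_one_projection(grad_fn, proj_K, d, T, R=1.0):
    y = np.zeros(d)                 # auxiliary iterate in the ball B(0, R)
    plays = []
    for t in range(1, T + 1):
        x = proj_K(y)               # the single (expensive) projection
        g = grad_fn(x, t)           # gradient of f_t at the played point
        y = y - (R / np.sqrt(t)) * g
        nrm = np.linalg.norm(y)     # cheap projection back onto the ball
        if nrm > R:
            y = y * (R / nrm)
        plays.append(x)
    return plays
```

The paper's surrogate loss is designed so that regret measured on the surrogate upper-bounds regret on the original functions; this sketch omits that correction and shows only the structural point.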
arXiv Detail & Related papers (2024-05-30T05:29:40Z)
- Universal Online Optimization in Dynamic Environments via Uniclass Prediction [0.0]
We propose a novel and intuitive framework for universal online optimization in dynamic environments.
Our strategy does not rely on the construction of a set of experts and an accompanying meta-algorithm.
This is the first paper proposing a universal approach with state-of-the-art dynamic regret guarantees even for general convex cost functions.
arXiv Detail & Related papers (2023-02-13T03:00:45Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [61.98523595657983]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
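A minimal sketch of the velocity-constraint idea, assuming a single smooth constraint $g(x) \le 0$; the constant `alpha`, the step size `dt`, and the half-space projection are illustrative choices, not the paper's algorithm:

```python
import numpy as np

def velocity_step(x, grad_f, g, grad_g, alpha=1.0, dt=0.01):
    """One discretized step: follow -grad f, but constrain the velocity
    so that the constraint value g(x) decays at rate alpha."""
    v = -grad_f(x)                       # unconstrained desired velocity
    gx, dgx = g(x), grad_g(x)
    # require <grad g(x), v> <= -alpha * g(x) whenever g(x) >= 0,
    # i.e. the velocity must point back toward the feasible set
    slack = np.dot(dgx, v) + alpha * gx
    if gx >= 0 and slack > 0:
        v = v - slack * dgx / np.dot(dgx, dgx)   # half-space projection
    return x + dt * v
```

Because the constraint acts on the velocity, each step needs only a closed-form projection onto a half-space rather than a projection onto the nonlinear feasible set.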
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A Constrained Optimization Approach to Bilevel Optimization with Multiple Inner Minima [49.320758794766185]
We propose a new approach that converts the bilevel problem into an equivalent constrained optimization problem, which can then be solved with a primal-dual algorithm.
Such an approach enjoys several advantages: (a) it addresses the multiple-inner-minima challenge; (b) it is fully first-order and efficient, requiring no Jacobian computations.
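A toy sketch of this reformulation, under the assumptions that `grad_f`/`grad_g` return the pair of partial gradients, the inner minimum $g^*(x) = \min_y g(x, y)$ is approximated by a few gradient steps, and all step sizes are illustrative:

```python
import numpy as np

def primal_dual_bilevel(f, g, grad_f, grad_g, x, y, steps=1000,
                        lr=0.01, inner=10):
    """Toy primal-dual loop for min f(x,y) s.t. g(x,y) - g*(x) <= 0,
    where g*(x) = min_y g(x, y); grad_f / grad_g return (d/dx, d/dy)."""
    lam = 0.0
    for _ in range(steps):
        # approximate y*(x) = argmin_y g(x, y) with a few inner steps
        y_star = y.copy()
        for _ in range(inner):
            y_star = y_star - lr * grad_g(x, y_star)[1]
        gfx, gfy = grad_f(x, y)
        ggx, ggy = grad_g(x, y)
        ggx_star = grad_g(x, y_star)[0]     # Danskin: d g*(x)/dx
        # descend on the Lagrangian f + lam * (g(x,y) - g(x,y*))
        x = x - lr * (gfx + lam * (ggx - ggx_star))
        y = y - lr * (gfy + lam * ggy)
        # ascend on the multiplier, clipped at zero
        lam = max(0.0, lam + lr * (g(x, y) - g(x, y_star)))
    return x, y
```

The loop is fully first-order: the multiplier `lam` ascends on the constraint violation $g(x, y) - g^*(x)$ while $(x, y)$ descend on the Lagrangian, with Danskin's theorem supplying $\nabla_x g^*$ without any Jacobian computations.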
arXiv Detail & Related papers (2022-03-01T18:20:01Z)
- Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z)
- SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms that can solve general problems.
In particular, our novel framework provides convergence-analysis support for adaptive methods under the nonconvex setting.
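A generic adaptive-gradient skeleton in the spirit of such a universal framework; this is illustrative only, with standard Adam/AdaGrad-style defaults rather than SUPER-ADAM's actual estimator:

```python
import numpy as np

def adaptive_step(grad_fn, x0, T, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, mode="adam"):
    """Skeleton of a universal adaptive-gradient loop: choose the
    preconditioner h and the estimator m to instantiate a method."""
    x = x0.copy()
    m = np.zeros_like(x)       # momentum-style gradient estimator
    v = np.zeros_like(x)       # running second moment
    for t in range(1, T + 1):
        g = grad_fn(x, t)
        m = beta1 * m + (1 - beta1) * g
        if mode == "adam":     # exponential moving average of g^2
            v = beta2 * v + (1 - beta2) * g * g
        else:                  # "adagrad": cumulative sum of g^2
            v = v + g * g
        h = np.sqrt(v) + eps   # diagonal adaptive preconditioner
        x = x - lr * m / h
    return x
```

Swapping the preconditioner `h` and the estimator `m` recovers different members of the family, which is the sense in which such a framework is universal.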
arXiv Detail & Related papers (2021-06-15T15:16:28Z)
- Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds.
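A compact sketch of this recipe; the `predict`/`update` expert interface is hypothetical, and plain Hedge with a fixed `eta` stands in for the second-order meta-algorithm the strategy actually relies on:

```python
import numpy as np

def universal_oco(experts, grad_fn, T, eta=0.1):
    """Meta-algorithm over black-box experts, run on linearized losses."""
    logw = np.zeros(len(experts))
    plays = []
    for t in range(1, T + 1):
        preds = [e.predict() for e in experts]      # expert decisions
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = sum(wi * p for wi, p in zip(w, preds))  # played point
        g = grad_fn(x, t)
        # meta-update on the linearized losses <g, x_t^i>; by convexity
        # regret on these transfers to regret on the original f_t
        lin = np.array([np.dot(g, p) for p in preds])
        logw -= eta * (lin - np.dot(g, x))          # centered for stability
        for e in experts:
            e.update(g)     # black-box experts consume the same feedback
        plays.append(x)
    return plays
```

Because $f_t(x_t) - f_t(x_t^i) \le \langle \nabla f_t(x_t), x_t - x_t^i \rangle$ under convexity, controlling the meta-regret on the linearized losses controls the regret on the original functions.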
arXiv Detail & Related papers (2021-05-08T11:43:49Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL (Polyak-Łojasiewicz) objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)