Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps
- URL: http://arxiv.org/abs/2602.13177v1
- Date: Fri, 13 Feb 2026 18:37:26 GMT
- Title: Improved Regret Guarantees for Online Mirror Descent using a Portfolio of Mirror Maps
- Authors: Swati Gupta, Jai Moondra, Mohit Singh
- Abstract summary: We show that it is possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$ where block norm-based mirror maps achieve a provable polynomial (in $d$) improvement in regret over OEG and OPGD.
- Score: 3.3654644618480547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online mirror descent (OMD) and its variants give a flexible framework for online convex optimization (OCO), where the performance depends crucially on the choice of the mirror map. While the geometries underlying online projected gradient descent (OPGD) and online exponentiated gradient (OEG), both special cases of OMD, are well understood, it remains a challenging open question how to construct an optimal mirror map for a given constraint set and a general family of loss functions, e.g., sparse losses. Motivated by parameterizing a near-optimal set of mirror maps, we consider a simpler question: is it even possible to obtain polynomial gains in regret by using mirror maps for geometries that interpolate between $L_1$ and $L_2$, beyond what is achievable by restricting to OEG ($L_1$) or OPGD ($L_2$) alone? Our main result answers this question positively. We show that mirror maps based on block norms adapt better to the sparsity of loss functions than previous $L_p$ (for $p \in [1, 2]$) interpolations. In particular, we construct a family of online convex optimization instances in $\mathbb{R}^d$ where block norm-based mirror maps achieve a provable polynomial (in $d$) improvement in regret over OEG and OPGD for sparse loss functions. We then turn to the setting in which the sparsity level of the loss functions is unknown. In this case, the choice of geometry itself becomes an online decision problem. We first show that naively switching between OEG and OPGD can incur linear regret, highlighting the intrinsic difficulty of geometry selection. To overcome this issue, we propose a meta-algorithm based on multiplicative weights that dynamically selects among a family of uniform block norms. We show that this approach effectively tunes OMD to the sparsity of the losses, yielding adaptive regret guarantees. Overall, our results demonstrate that online mirror-map selection can significantly enhance the ability of OMD to exploit sparsity in online convex optimization.
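To make the geometry-selection idea concrete, below is a minimal, hypothetical Python sketch: OPGD and OEG on the probability simplex as base learners, with a multiplicative-weights (Hedge) meta-algorithm that plays their weighted average and reweights them by incurred loss. This illustrates only the selection mechanism; the paper's actual portfolio consists of uniform block-norm mirror maps, which would slot in here as additional experts.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

class OPGD:
    """Online projected gradient descent: OMD with the Euclidean mirror map."""
    def __init__(self, d, eta):
        self.x, self.eta = np.full(d, 1.0 / d), eta
    def play(self):
        return self.x
    def update(self, grad):
        self.x = project_simplex(self.x - self.eta * grad)

class OEG:
    """Online exponentiated gradient: OMD with the entropic mirror map."""
    def __init__(self, d, eta):
        self.x, self.eta = np.full(d, 1.0 / d), eta
    def play(self):
        return self.x
    def update(self, grad):
        w = self.x * np.exp(-self.eta * grad)
        self.x = w / w.sum()

def hedge_over_geometries(experts, rounds, eta_meta=0.5):
    """Multiplicative-weights meta-algorithm over base OMD learners.
    Plays the weighted average of the experts' points (valid since the
    decision set and losses are convex), then reweights each expert by
    the loss its own point would have incurred."""
    p = np.full(len(experts), 1.0 / len(experts))
    total = 0.0
    for loss, grad in rounds:            # each round: (loss fn, gradient fn)
        xs = [e.play() for e in experts]
        x = sum(pi * xi for pi, xi in zip(p, xs))       # aggregated decision
        total += loss(x)
        ell = np.array([loss(xi) for xi in xs])
        p = p * np.exp(-eta_meta * (ell - ell.min()))   # stabilized Hedge step
        p /= p.sum()
        for e, xi in zip(experts, xs):
            e.update(grad(xi))
    return total

# Toy run: 1-sparse linear losses on the simplex in R^10 (hypothetical setup).
rng = np.random.default_rng(0)
d, T = 10, 200
rounds = []
for _ in range(T):
    c = np.zeros(d)
    c[rng.integers(d)] = rng.normal()   # sparse loss vector <c, x>
    rounds.append((lambda x, c=c: float(c @ x), lambda x, c=c: c))
print(hedge_over_geometries([OPGD(d, 0.1), OEG(d, 0.1)], rounds))
```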
Related papers
- Adaptive Matrix Online Learning through Smoothing with Guarantees for Nonsmooth Nonconvex Optimization [54.723834588133165]
We study online linear optimization with matrix variables under the operator-norm geometry (AML), a setting where the geometry renders designing data-dependent and efficient adaptive algorithms challenging.
We instantiate this framework with two efficient methods that avoid projections.
We show both methods admit closed-form updates and match one-sided Shampoo's regret up to a constant factor, while significantly reducing computational cost.
arXiv Detail & Related papers (2026-02-09T03:09:47Z) - Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Optimization [14.970965673760427]
We propose an online mirror descent algorithm for Tchebycheff scalarization, which we call OMD-TCH.
We show the effectiveness of OMD-TCH on both synthetic problems and federated learning tasks under fairness constraints.
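For reference, Tchebycheff scalarization collapses $m$ objectives $f_1, \dots, f_m$ into a single worst-case objective; a standard form (an assumption on our part, not necessarily the paper's exact formulation), with weights $\lambda$ in the simplex and ideal point $z^*$, is:

```latex
\min_{x \in \mathcal{X}} \; \max_{i \in [m]} \; \lambda_i \big( f_i(x) - z_i^* \big).
```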
arXiv Detail & Related papers (2024-10-29T05:58:33Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
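Here $V_T$ denotes the gradient variation of the loss sequence; the standard definition in this line of work (our assumption, matching the paper's title) is:

```latex
V_T \;=\; \sum_{t=2}^{T} \sup_{x \in \mathcal{X}} \big\| \nabla f_t(x) - \nabla f_{t-1}(x) \big\|_2^2 .
```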
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Riemannian Projection-free Online Learning [5.918057694291832]
The projection operation is a critical component in a range of optimization algorithms, such as online gradient descent (OGD).
It suffers from computational limitations in high-dimensional settings or when dealing with ill-conditioned constraint sets.
This paper presents methods for obtaining sub-linear regret guarantees in online geodesically convex optimization on curved spaces.
arXiv Detail & Related papers (2023-05-30T18:22:09Z) - Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently [34.438887960077025]
We show that $p$-$\textsf{GD}$ can noticeably affect the structure and the generalization performance of the learned models.
We also show that $p$-$\textsf{GD}$ is fully parallelizable in the same manner as SGD and can be used to train deep neural networks.
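A minimal sketch of why the update parallelizes: assuming the potential $\psi(w) = \tfrac{1}{p} \sum_i |w_i|^p$ (one common choice for $p$-norm mirror descent; the paper may use a different normalization), the mirror map and its inverse act coordinatewise, so each step is as parallel as an SGD step:

```python
import numpy as np

def p_gd_step(w, grad, eta, p):
    """One mirror-descent step with potential psi(w) = (1/p) * sum_i |w_i|^p.
    The mirror map grad(psi)(w)_i = sign(w_i)|w_i|^(p-1) acts entrywise,
    so the update parallelizes across coordinates exactly like SGD."""
    assert p > 1, "the inverse link needs p > 1"
    z = np.sign(w) * np.abs(w) ** (p - 1) - eta * grad   # step in the dual space
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1))     # map back to the primal

# Toy example: p = 1.5 biases iterates toward sparser solutions than p = 2
# (which recovers plain gradient descent).
target = np.array([1.0, 0.0, 0.0, 0.0, 0.5])
w = np.zeros(5)
for _ in range(100):
    grad = 2 * (w - target)          # gradient of a toy quadratic loss
    w = p_gd_step(w, grad, eta=0.05, p=1.5)
print(w)
```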
arXiv Detail & Related papers (2022-05-25T14:33:13Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - A Unified Analysis Method for Online Optimization in Normed Vector Space [3.615256443481602]
We present a unified analysis method that relies on the generalized cosine rule for online optimization in normed vector space.
We extend online convex optimization to online monotone optimization by replacing the losses of the online game with monotone operators.
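For context, the generalized cosine rule in the Bregman/OMD setting is the three-point identity for the divergence $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle$; the paper applies a version of it in normed vector spaces, but the standard convex-analytic form reads:

```latex
\langle \nabla \phi(a) - \nabla \phi(b),\; c - a \rangle
  \;=\; D_\phi(c, b) - D_\phi(c, a) - D_\phi(a, b).
```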
arXiv Detail & Related papers (2021-12-22T18:48:19Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds.
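A hypothetical sketch of this construction: the meta-algorithm plays a combined point $x_t$, queries the gradient $g_t = \nabla f_t(x_t)$ once, and charges every expert the linear surrogate $\langle g_t, \cdot \rangle$; by convexity, regret on the surrogates upper-bounds the true regret, so the base solvers never need to see the original losses. The `OPGD`/`OEG` classes from the earlier sketch would fit this expert interface.

```python
import numpy as np

def universal_meta(experts, observe_grad, T, eta=0.5):
    """Hedge over black-box online solvers fed *linearized* losses.
    Each expert exposes play() -> point and update(gradient), and
    observe_grad(x) returns grad f_t(x) for the current round
    (hypothetical interfaces for illustration)."""
    p = np.full(len(experts), 1.0 / len(experts))
    x_t = None
    for _ in range(T):
        xs = [e.play() for e in experts]
        x_t = sum(pi * xi for pi, xi in zip(p, xs))   # combined decision
        g_t = observe_grad(x_t)                       # one gradient query per round
        lin = np.array([float(g_t @ xi) for xi in xs])
        p = p * np.exp(-eta * (lin - lin.min()))      # Hedge on surrogate losses
        p /= p.sum()
        for e in experts:
            e.update(g_t)                             # same linear surrogate to all
    return x_t
```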
arXiv Detail & Related papers (2021-05-08T11:43:49Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.