A Unified Analysis Method for Online Optimization in Normed Vector Space
- URL: http://arxiv.org/abs/2112.12134v1
- Date: Wed, 22 Dec 2021 18:48:19 GMT
- Title: A Unified Analysis Method for Online Optimization in Normed Vector Space
- Authors: Qingxin Meng, Jianwei Liu
- Abstract summary: We present a unified analysis method that relies on the generalized cosine rule for online optimization in normed vector space.
We extend online convex optimization to online monotone optimization by replacing the losses of the online game with monotone operators.
- Score: 3.615256443481602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a unified analysis method that relies on the generalized cosine rule and $\phi$-convexity for online optimization in normed vector space, using dynamic regret as the performance metric. In combining the update rules, we start with strategy $S$ (a two-parameter variant strategy covering Optimistic-FTRL with surrogate linearized losses), and obtain $S$-I (the type-I relaxation variant form of $S$) and $S$-II (the type-II relaxation variant form of $S$, which is Optimistic-MD) by relaxation. The regret bounds for $S$-I and $S$-II are the tightest possible. As instantiations, the regret bounds of normalized exponentiated subgradient and greedy/lazy projection improve on the currently known optimal results. We extend online convex optimization to online monotone optimization by replacing the losses of the online game with monotone operators and extending the definition of regret to regret$^n$, thereby expanding the application scope of $S$-I and $S$-II.
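To make the $S$-II (Optimistic-MD) pattern concrete, here is a minimal sketch of one round in the Euclidean special case, where the update reduces to an optimistic greedy projection; the step size, the hint sequence, and the ball-shaped feasible set are illustrative assumptions, not the paper's own instantiation.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a norm ball (illustrative feasible set)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def optimistic_md_round(y_aux, hint, grad_fn, eta=0.1):
    """One optimistic mirror-descent-style round (Euclidean case): play a
    predictive step against the hint, observe the subgradient at the played
    point, then refresh the auxiliary iterate with that subgradient."""
    x_play = project_ball(y_aux - eta * hint)  # optimistic step using the hint
    g = grad_fn(x_play)                        # subgradient revealed by the environment
    y_next = project_ball(y_aux - eta * g)     # mirror-descent step on the observed loss
    return x_play, y_next
```

With an entropic mirror map on the simplex, the same two-step pattern yields an optimistic exponentiated-subgradient update, the other instantiation mentioned above.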
Related papers
- Small Gradient Norm Regret for Online Convex Optimization [19.699405661554845]
We show that the $G^*$ regret strictly refines the existing $L^*$ (small-loss) regret. We extend our results to dynamic regret and bandit settings.
arXiv Detail & Related papers (2026-01-20T02:11:01Z) - A Simple, Optimal and Efficient Algorithm for Online Exp-Concave Optimization [72.1530234801628]
Online Newton Step (ONS) is a fundamental algorithm in online learning. ONS faces a computational bottleneck due to the Mahalanobis projections required at each round. This paper proposes a simple variant of ONS, LightONS, which reduces the total runtime to $O(d^2 T + d\sqrt{T}\log T)$ while preserving an optimal regret. This design enables LightONS to serve as an efficient plug-in replacement of ONS in broader scenarios.
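For context on the bottleneck mentioned above, here is a minimal sketch of a generic Online Newton Step round (standard ONS, not LightONS); the step parameter, the unit-ball feasible set, the bisection-based projection, and the assumption that `A` starts as a small multiple of the identity are all illustrative.

```python
import numpy as np

def mahalanobis_proj_ball(x, A, radius=1.0, iters=60):
    """Project x onto the Euclidean ball of the given radius in the Mahalanobis
    norm induced by A: argmin_{||y|| <= r} (y - x)^T A (y - x), solved here by
    bisection on the Lagrange multiplier. This per-round projection is the cost
    that LightONS is designed to avoid."""
    if np.linalg.norm(x) <= radius:
        return x
    d = x.shape[0]
    lo, hi = 0.0, 1e8
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        y = np.linalg.solve(A + lam * np.eye(d), A @ x)
        lo, hi = (lam, hi) if np.linalg.norm(y) > radius else (lo, lam)
    return y

def ons_round(x, A, grad, gamma=1.0):
    """One generic ONS round: rank-one curvature update, a Newton-like step,
    then a Mahalanobis projection back onto the feasible set."""
    A = A + np.outer(grad, grad)
    step = x - (1.0 / gamma) * np.linalg.solve(A, grad)
    return mahalanobis_proj_ball(step, A), A
```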
arXiv Detail & Related papers (2025-12-29T03:59:51Z) - Parameter-free Algorithms for the Stochastically Extended Adversarial Model [59.81852138768642]
Existing approaches for the Stochastically Extended Adversarial (SEA) model require prior knowledge of problem-specific parameters, such as the diameter of the domain $D$ and the Lipschitz constant of the loss functions $G$. We develop parameter-free methods by leveraging the Optimistic Online Newton Step (OONS) algorithm to eliminate the need for these parameters.
arXiv Detail & Related papers (2025-10-06T10:53:37Z) - On the necessity of adaptive regularisation: Optimal anytime online learning on $\boldsymbol{\ell_p}$-balls [17.599348264171862]
We study online convex optimization on $\ell_p$-balls in $\mathbb{R}^d$ for $p > 2$. While always sub-linear, the optimal regret exhibits a shift between the high-dimensional setting ($d > T$) and the low-dimensional setting ($d \leq T$).
arXiv Detail & Related papers (2025-06-24T16:06:56Z) - FedSVD: Adaptive Orthogonalization for Private Federated Learning with LoRA [68.44043212834204]
Low-Rank Adaptation (LoRA) is widely used for efficient fine-tuning of language models in federated learning (FL).
arXiv Detail & Related papers (2025-05-19T07:32:56Z) - Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization [29.705337940879705]
We introduce a novel framework for decentralized projection-free optimization.
We combine decentralized optimization techniques with the flexibility of upper-linearizable function frameworks.
arXiv Detail & Related papers (2025-01-30T07:28:34Z) - Iterative Reweighted Least Squares Networks With Convergence Guarantees
for Solving Inverse Imaging Problems [12.487990897680422]
We present a novel optimization strategy for image reconstruction tasks under analysis-based image regularization.
We parameterize such regularizers using potential functions that correspond to weighted extensions of the $\ell_p^p$-vector and $\mathcal{S}_p^p$ Schatten-matrix quasi-norms.
We show that thanks to the convergence guarantees of our proposed minimization strategy, such optimization can be successfully performed with a memory-efficient implicit back-propagation scheme.
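As a rough illustration of the reweighting idea (a generic analysis-based IRLS step, not the paper's network parameterization or its convergence-guaranteed scheme), assuming an $\ell_p^p$ penalty on analysis coefficients with a small smoothing constant:

```python
import numpy as np

def irls_analysis(y, H, L, p=1.0, lam=0.1, eps=1e-6, iters=50):
    """Generic iteratively reweighted least squares for an analysis-based
    l_p^p penalty: minimize ||y - H x||^2 + lam * sum_i |(L x)_i|^p, p <= 2.
    Each iteration recomputes the weights and solves a linear system."""
    x = np.linalg.lstsq(H, y, rcond=None)[0]        # least-squares initialization
    for _ in range(iters):
        w = ((L @ x) ** 2 + eps) ** ((p - 2) / 2)   # smoothed IRLS weights
        A = H.T @ H + lam * L.T @ (w[:, None] * L)  # reweighted normal equations
        x = np.linalg.solve(A, H.T @ y)
    return x
```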
arXiv Detail & Related papers (2023-08-10T17:59:46Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Near-Optimal Algorithms for Private Online Optimization in the
Realizable Regime [74.52487417350221]
We consider online learning problems in the realizable setting, where there is a zero-loss solution.
We propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds.
arXiv Detail & Related papers (2023-02-27T21:19:24Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
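For readers unfamiliar with the update, here is a minimal sketch of one online Frank-Wolfe round; the $\ell_1$-ball feasible set and the decaying step size are illustrative assumptions, and the point is that a linear minimization oracle replaces the projection.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over the l1 ball: returns a signed vertex."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def ofw_round(x_t, grad_t, t):
    """One online Frank-Wolfe round: query the linear minimization oracle
    instead of projecting, then move by a convex combination."""
    v = lmo_l1_ball(grad_t)
    gamma = 1.0 / np.sqrt(t + 1)  # illustrative decaying step size
    return (1.0 - gamma) * x_t + gamma * v
```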
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Optimal Stochastic Non-smooth Non-convex Optimization through
Online-to-Non-convex Conversion [56.92236659731376]
We present new algorithms for optimizing non-convex, non-smooth objectives based on a novel analysis technique.
For deterministic second-order smooth objectives, applying advanced optimistic online learning techniques enables a new $O(\delta^{0.5})$ rate to recover optimal or best-known results.
arXiv Detail & Related papers (2023-02-07T22:09:20Z) - Differentially Private Online-to-Batch for Smooth Losses [38.23708749658059]
We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private convex optimization algorithm with the optimal convergence rate $\tilde{O}(1/\sqrt{T} + \sqrt{d}/(\epsilon T))$ on smooth losses in linear time.
arXiv Detail & Related papers (2022-10-12T21:13:31Z) - Optimistic Online Convex Optimization in Dynamic Environments [3.2721751217329977]
We replace Greedy Projection (GP) and Normalized Exponentiated Subgradient (NES) in Ader with Optimistic-GP and Optimistic-NES respectively, and name the corresponding algorithm ONES-OGP.
We extend the doubling trick to the adaptive trick, and introduce three characteristic terms naturally arising from optimism, namely $M_T$, $\widetilde{M}_T$ and $V_T + \mathbf{1}_{L^2\rho\left(\rho+2 P_T\right)\leqslant\varrho^2 V_T}\, D_T$.
arXiv Detail & Related papers (2022-03-28T06:22:05Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Revisiting Smoothed Online Learning [70.09792747315323]
We investigate the problem of smoothed online learning, in which the online learner suffers both a hitting cost and a switching cost.
To bound the competitive ratio, we assume the hitting cost is known to the learner in each round, and investigate the greedy algorithm which simply minimizes the weighted sum of the hitting cost and the switching cost.
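A minimal sketch of that greedy step is given below; the switching-cost weight, the norm used for the switching cost, and the use of a generic numerical solver are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def greedy_smoothed_step(hitting_cost, x_prev, weight=1.0):
    """One round of the greedy strategy: minimize the known hitting cost plus
    a weighted switching cost measured from the previous decision."""
    obj = lambda x: hitting_cost(x) + weight * np.linalg.norm(x - x_prev)
    return minimize(obj, x0=np.asarray(x_prev, dtype=float)).x
```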
arXiv Detail & Related papers (2021-02-13T14:15:55Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.