Optimistic Online Convex Optimization in Dynamic Environments
- URL: http://arxiv.org/abs/2203.14520v1
- Date: Mon, 28 Mar 2022 06:22:05 GMT
- Title: Optimistic Online Convex Optimization in Dynamic Environments
- Authors: Qing-xin Meng, Jian-wei Liu
- Abstract summary: We replace Greedy Projection (GP) and Normalized Exponentiated Subgradient (NES) in Ader with Optimistic-GP and Optimistic-NES respectively, and name the corresponding algorithm ONES-OGP.
We extend the doubling trick to the adaptive trick, and introduce three characteristic terms that naturally arise from optimism, namely $M_T$, $\widetilde{M}_T$ and $V_T+1_{L^2\rho\left(\rho+2 P_T\right)\leqslant\varrho^2 V_T}D_T$.
- Score: 3.2721751217329977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the optimistic online convex optimization problem in
dynamic environments. Existing works have shown that Ader enjoys an
$O\left(\sqrt{\left(1+P_T\right)T}\right)$ dynamic regret upper bound, where
$T$ is the number of rounds, and $P_T$ is the path length of the reference
strategy sequence. However, Ader is not environment-adaptive. Based on the fact
that optimism provides a framework for achieving environment adaptivity, we
replace Greedy Projection (GP) and Normalized Exponentiated Subgradient (NES)
in Ader with Optimistic-GP and Optimistic-NES respectively, and name the
corresponding algorithm ONES-OGP. We also extend the doubling trick to the
adaptive trick, and introduce three characteristic terms that naturally arise from
optimism, namely $M_T$, $\widetilde{M}_T$ and $V_T+1_{L^2\rho\left(\rho+2
P_T\right)\leqslant\varrho^2 V_T}D_T$, to replace the dependence of the dynamic
regret upper bound on $T$. We elaborate ONES-OGP with adaptive trick and its
subgradient variation version, all of which are environment-adaptive.
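The optimistic two-step update that replaces GP in Ader can be sketched as follows. This is a minimal one-dimensional illustration of optimistic online gradient descent, not the paper's full ONES-OGP; the feasible set $[-1,1]$, the step size, and the last-gradient hint are all illustrative assumptions:

```python
def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def optimistic_ogd(grad_fns, eta=0.1):
    """Optimistic online gradient descent (1-D sketch).

    Keeps a secondary iterate y updated with observed gradients, and
    plays x_t = project(y - eta * m_t), where the hint m_t is taken to
    be the previously observed gradient (an illustrative choice).
    """
    y, last_g, plays = 0.0, 0.0, []
    for grad in grad_fns:
        x = project(y - eta * last_g)  # optimistic play using the hint
        g = grad(x)                    # gradient revealed at the played point
        plays.append(x)
        y = project(y - eta * g)       # update the secondary iterate
        last_g = g
    return plays

# Toy run: fixed loss f(x) = (x - 0.5)^2, so the plays approach 0.5.
plays = optimistic_ogd([lambda x: 2 * (x - 0.5)] * 100)
```

When the hints are accurate (losses vary slowly), the optimistic play is closer to the next minimizer than the plain GP iterate, which is the mechanism behind the $M_T$-type terms above.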
Related papers
- Adaptivity and Universality: Problem-dependent Universal Regret for Online Convex Optimization [64.88607416000376]
We introduce UniGrad, a novel approach that achieves both universality and adaptivity, with two distinct realizations: UniGrad.Correct and UniGrad.Bregman. Both methods achieve universal regret guarantees that adapt to gradient variation, simultaneously attaining $\mathcal{O}(\log V_T)$ regret for strongly convex functions and $\mathcal{O}(d \log V_T)$ regret for exp-concave functions.
arXiv Detail & Related papers (2025-11-25T05:23:10Z) - Online Linear Regression in Dynamic Environments via Discounting [38.15622843661145]
We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees.
We show that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form $R_T(\vec{u}) \le O\left(d\log(T) \vee \sqrt{d P_T^\gamma(\vec{u})}\right)$, where $P_T^\gamma(\vec{u})$ is a measure of variability of the comparator sequence.
arXiv Detail & Related papers (2024-05-29T15:17:53Z) - Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring [41.20252349191698]
Partial monitoring is a generic framework of online decision-making problems with limited feedback. We present a new framework and analysis for ExO with a hybrid regularizer. In particular, we derive a regret bound of $O(\sum_{a \neq a^*} \cdots)$, where $k$, $m$, and $T$ are the numbers of actions, observations and rounds.
arXiv Detail & Related papers (2024-02-13T09:34:22Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-free) gradient based adaption.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Non-stationary Online Convex Optimization with Arbitrary Delays [50.46856739179311]
This paper investigates the delayed online convex optimization (OCO) in non-stationary environments.
We first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order.
We develop an improved algorithm, which reduces the dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$.
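The DOGD idea can be sketched as follows, assuming a one-dimensional decision set $[-1,1]$ and a hand-built delay pattern (both illustrative, not the paper's setup):

```python
def dogd(arrivals, eta=0.1, lo=-1.0, hi=1.0):
    """Delayed online gradient descent (1-D sketch).

    arrivals[t] lists the delayed gradients that become available at
    round t; DOGD applies one projected descent step per arriving
    gradient, in arrival order, regardless of when it was generated.
    """
    x, plays = 0.0, []
    for grads_now in arrivals:
        plays.append(x)                     # play the current point
        for g in grads_now:                 # process in arrival order
            x = max(lo, min(hi, x - eta * g))
    return plays

# Gradients generated at rounds 0 and 1 each arrive one round late.
plays = dogd([[], [1.0], [1.0], []])
```

The point of processing by arrival order is that the learner never has to know each gradient's original round, only that it has arrived.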
arXiv Detail & Related papers (2023-05-20T07:54:07Z) - Best-of-Three-Worlds Linear Bandit Algorithm with Variance-Adaptive
Regret Bounds [27.92755687977196]
The paper proposes a linear bandit algorithm that is adaptive to environments at two different levels of hierarchy.
At the higher level, the proposed algorithm adapts to a variety of types of environments.
The proposed algorithm has data-dependent regret bounds that depend on all of the cumulative loss for the optimal action.
arXiv Detail & Related papers (2023-02-24T00:02:24Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
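The projection-free step at the core of OFW can be sketched as follows (a one-dimensional illustration with a fixed step size; the interval domain, step size, and quadratic loss are assumptions, not the paper's setting):

```python
def online_frank_wolfe(grad_fns, lo=-1.0, hi=1.0, sigma=0.1):
    """Online Frank-Wolfe (1-D sketch over [lo, hi]).

    Each round calls a linear minimization oracle instead of a
    projection: move toward the endpoint that minimizes the linear
    approximation, via a convex-combination step that keeps the
    iterate feasible without ever projecting.
    """
    x, plays = 0.0, []
    for grad in grad_fns:
        plays.append(x)
        g = grad(x)
        v = lo if g > 0 else hi     # linear minimization over [lo, hi]
        x = x + sigma * (v - x)     # convex combination stays feasible
    return plays

# Toy run: fixed loss f(x) = (x - 0.5)^2; the plays hover near 0.5.
plays = online_frank_wolfe([lambda x: 2 * (x - 0.5)] * 100)
```

Avoiding the projection is what makes OFW attractive on domains where projection is expensive (e.g., spectral or polytope constraints), at the cost of slower per-round progress.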
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Second Order Path Variationals in Non-Stationary Online Learning [23.91519151164528]
We show that appropriately designed Strongly Adaptive algorithms achieve a dynamic regret of $\tilde{O}(d^2 n^{1/5} C_n^{2/5} \vee d^2)$.
The aforementioned dynamic regret rate is shown to be optimal modulo dimension dependencies and poly-logarithmic factors of $n$.
arXiv Detail & Related papers (2022-05-04T07:14:39Z) - A Unified Analysis Method for Online Optimization in Normed Vector Space [3.615256443481602]
We present a unified analysis method that relies on the generalized cosine rule for online optimization in normed vector space.
We extend online convex optimization to online monotone optimization, by replacing the losses of the online game with monotone operators.
arXiv Detail & Related papers (2021-12-22T18:48:19Z) - Optimistic Policy Optimization is Provably Efficient in Non-stationary
MDPs [45.6318149525364]
We study episodic reinforcement learning (RL) in non-stationary linear kernel Markov decision processes (MDPs)
We propose the periodically restarted optimistic policy optimization algorithm (PROPO).
arXiv Detail & Related papers (2021-10-18T02:33:20Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Nearly Minimax Optimal Reward-free Reinforcement Learning [88.75843804630772]
We study the reward-free reinforcement learning framework, which is particularly suitable for batch reinforcement learning and scenarios where one needs policies for multiple reward functions.
We give a new efficient algorithm, Staged Sampling + Truncated Planning, which interacts with the environment for at most $O\left(\frac{S^2 A}{\epsilon^2}\,\mathrm{polylog}\left(\frac{SAH}{\epsilon^2}\right)\right)$ episodes.
arXiv Detail & Related papers (2020-10-12T17:51:19Z) - Dynamic Regret of Convex and Smooth Functions [93.71361250701075]
We investigate online convex optimization in non-stationary environments.
We choose the dynamic regret as the performance measure.
We show that it is possible to further enhance the dynamic regret by exploiting the smoothness condition.
arXiv Detail & Related papers (2020-07-07T14:10:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.