Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization
- URL: http://arxiv.org/abs/2102.07387v1
- Date: Mon, 15 Feb 2021 08:16:51 GMT
- Title: Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization
- Authors: Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain
- Abstract summary: We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ on the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm, \sbcalg, that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
- Score: 51.23789922123412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study online learning with bandit feedback (i.e., the learner has access
only to a zeroth-order oracle) where the cost/reward functions $f_t$ admit a
"pseudo-1d" structure, i.e., $f_t(\mathbf{w}) = \ell_t(\mathrm{pred}_t(\mathbf{w}))$, where the output
of $\mathrm{pred}_t$ is one-dimensional. At each round, the learner observes context
$\mathbf{x}_t$, plays prediction $\mathrm{pred}_t(\mathbf{w}_t; \mathbf{x}_t)$ (e.g.,
$\mathrm{pred}_t(\cdot) = \langle \mathbf{x}_t, \cdot \rangle$) for some $\mathbf{w}_t \in \mathbb{R}^d$, and observes loss
$\ell_t(\mathrm{pred}_t(\mathbf{w}_t))$, where $\ell_t$ is a convex, Lipschitz-continuous
function. The goal is to minimize the standard regret metric. This pseudo-1d
bandit convex optimization problem (\SBCO) arises frequently in domains such as
online decision-making and parameter tuning in large systems. For this problem,
we first show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ on the regret of any
algorithm, where $T$ is the number of rounds. We propose a new algorithm,
\sbcalg, that combines randomized online gradient descent with a kernelized
exponential weights method to exploit the pseudo-1d structure effectively,
guaranteeing the {\em optimal} regret bound mentioned above, up to additional
logarithmic factors. In contrast, applying state-of-the-art online convex
optimization methods leads to
$\tilde{O}\left(\min\left(d^{9.5}\sqrt{T},\sqrt{d}\,T^{3/4}\right)\right)$
regret, which is significantly suboptimal in $d$.
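To make the interaction protocol concrete, below is a minimal, illustrative sketch of the pseudo-1d bandit feedback loop with a one-point (Flaxman-style) gradient estimator as a simple baseline. This is not the paper's \sbcalg (which combines randomized online gradient descent with kernelized exponential weights); the loss sequence, feasible set, step size, and perturbation size are hypothetical choices, used only to show that the learner sees nothing but the scalar loss $\ell_t(\langle \mathbf{x}_t, \mathbf{w}_t \rangle)$ each round.

```python
import numpy as np

# Illustrative sketch of the pseudo-1d bandit feedback protocol with a
# one-point (Flaxman-style) gradient estimate as a simple baseline.
# NOT the paper's algorithm; loss_fn, step size, and perturbation size
# are hypothetical stand-ins.

rng = np.random.default_rng(0)
d, T = 5, 10_000          # dimension and number of rounds
W_RADIUS = 1.0            # feasible set: Euclidean ball of radius 1
delta = T ** -0.25        # perturbation size (illustrative choice)
eta = T ** -0.75          # step size (illustrative choice)

def project(w, radius=W_RADIUS):
    """Project w back onto the Euclidean ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def loss_fn(t, y):
    """Hypothetical convex, Lipschitz loss ell_t applied to the 1-d prediction y."""
    target = np.sin(0.01 * t)          # stand-in for an adversarial sequence
    return abs(y - target)

w = np.zeros(d)
total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)    # observed context x_t
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                 # uniform direction on the sphere
    w_play = project(w + delta * u)        # randomized play around w_t
    y = float(x @ w_play)                  # pseudo-1d prediction <x_t, w_play>
    loss = loss_fn(t, y)                   # only ell_t(pred_t(w_play)) is observed
    total_loss += loss
    grad_est = (d / delta) * loss * u      # one-point bandit gradient estimate
    w = project(w - eta * grad_est)        # online gradient descent step

print(f"average per-round loss: {total_loss / T:.4f}")
```

With these standard one-point-estimator parameter choices the baseline attains roughly $T^{3/4}$-type regret; the point of \sbcalg is to exploit the pseudo-1d structure to reach the optimal $\min(\sqrt{dT}, T^{3/4})$ rate instead.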
Related papers
- Nearly Optimal Regret for Decentralized Online Convex Optimization [53.433398074919]
Decentralized online convex optimization (D-OCO) aims to minimize a sequence of global loss functions using only local computations and communications.
We develop novel D-OCO algorithms that can respectively reduce the regret bounds for convex and strongly convex functions.
Our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
arXiv Detail & Related papers (2024-02-14T13:44:16Z)
- Dueling Optimization with a Monotone Adversary [35.850072415395395]
We study the problem of dueling optimization with a monotone adversary, which is a generalization of (noiseless) dueling convex optimization.
The goal is to design an online algorithm to find a minimizer $\mathbf{x}^*$ of a function $f \colon \mathcal{X} \to \mathbb{R}$ over a domain $\mathcal{X} \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2023-11-18T23:55:59Z)
- Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime [74.52487417350221]
We consider online learning problems in the realizable setting, where there is a zero-loss solution.
We propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds.
arXiv Detail & Related papers (2023-02-27T21:19:24Z)
- Logarithmic Regret from Sublinear Hints [76.87432703516942]
We show that an algorithm can obtain $O(\log T)$ regret with just $O(\sqrt{T})$ hints under a natural query model.
We also show that $o(\sqrt{T})$ hints cannot guarantee better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2021-11-09T16:50:18Z)
- Contextual Recommendations and Low-Regret Cutting-Plane Algorithms [49.91214213074933]
We consider the following variant of contextual linear bandits motivated by routing applications in navigational engines and recommendation systems.
We design novel cutting-plane algorithms with low "regret" -- the total distance between the true point $w^*$ and the hyperplanes the separation oracle returns.
arXiv Detail & Related papers (2021-06-09T05:39:05Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Adaptive Online Learning with Varying Norms [45.11667443216861]
We provide an online convex optimization algorithm that outputs points $w_t$ in some domain $W$.
We apply this result to obtain new "full-matrix"-style regret bounds.
arXiv Detail & Related papers (2020-02-10T17:22:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.