Contextual Bandits with Smooth Regret: Efficient Learning in Continuous
Action Spaces
- URL: http://arxiv.org/abs/2207.05849v1
- Date: Tue, 12 Jul 2022 21:27:09 GMT
- Title: Contextual Bandits with Smooth Regret: Efficient Learning in Continuous
Action Spaces
- Authors: Yinglun Zhu, Paul Mineiro
- Abstract summary: We design efficient general-purpose contextual bandit algorithms for large -- or even continuous -- action spaces.
We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives.
Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret.
- Score: 14.366265951396587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing efficient general-purpose contextual bandit algorithms that work
with large -- or even continuous -- action spaces would facilitate application
to important scenarios such as information retrieval, recommendation systems,
and continuous control. While obtaining standard regret guarantees can be
hopeless, alternative regret notions have been proposed to tackle the large
action setting. We propose a smooth regret notion for contextual bandits, which
dominates previously proposed alternatives. We design a statistically and
computationally efficient algorithm -- for the proposed smooth regret -- that
works with general function approximation under standard supervised oracles. We
also present an adaptive algorithm that automatically adapts to any smoothness
level. Our algorithms can be used to recover the previous minimax/Pareto
optimal guarantees under the standard regret, e.g., in bandit problems with
multiple best arms and Lipschitz/Hölder bandits. We conduct large-scale
empirical evaluations demonstrating the efficacy of our proposed algorithms.
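Roughly, and as a reading of the abstract rather than the paper's exact statement: given a base measure $\mu$ over the action space and a smoothness level $h$, the smooth regret compares the learner against the best $h$-smooth distribution over actions in each context (one whose density with respect to $\mu$ is bounded, e.g. by $1/h$), rather than against the best single action:

$$\mathrm{Reg}_h(T) = \sum_{t=1}^{T} \Big[ \sup_{Q :\, dQ/d\mu \,\le\, 1/h} \mathbb{E}_{a \sim Q}\big[f^\star(x_t, a)\big] - \mathbb{E}_{a_t \sim p_t}\big[f^\star(x_t, a_t)\big] \Big],$$

where $f^\star$ is the true mean-reward function, $x_t$ the context, and $p_t$ the algorithm's action distribution at round $t$. Because the comparator places mass on action sets of positive measure, this notion remains attainable in continuous action spaces where the standard regret against the single best action is hopeless.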
Related papers
- On Adaptivity in Non-stationary Stochastic Optimization With Bandit
Feedback [11.208914594208654]
We show that, when the aggregate amount of function change is known a priori, a simple restarting algorithm attains the optimal dynamic regret.
We also show that any algorithm achieving good regret against stationary benchmarks with high probability can be automatically converted into an algorithm that achieves good regret against dynamic benchmarks.
arXiv Detail & Related papers (2022-10-11T16:16:34Z) - Non-Stationary Bandits under Recharging Payoffs: Improved Planning with
Sublinear Regret [34.44347218903429]
The multi-armed bandit setting has recently been studied in the non-stationary regime.
The mean payoff of each action is a non-decreasing function of the number of rounds that have passed since it was last played.
We show how our algorithm can be transformed into a bandit algorithm with sublinear regret.
arXiv Detail & Related papers (2022-05-29T23:55:36Z) - Efficient and Optimal Algorithms for Contextual Dueling Bandits under
Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks [81.13338949407205]
Recent works show that optimal bandit algorithms are vulnerable to adversarial attacks and can fail completely in their presence.
Existing robust bandit algorithms only handle the non-contextual setting under attacks on rewards.
We provide the first robust bandit algorithm for linear contextual bandit setting under a fully adaptive and omniscient attack.
arXiv Detail & Related papers (2021-06-05T22:20:34Z) - Adapting to misspecification in contextual bandits with offline
regression oracles [7.312170216336086]
We propose a family of contextual bandit algorithms that adapt to misspecification error by reverting to a good safe policy.
Our algorithm requires only an offline regression oracle to ensure regret guarantees that gracefully degrade in terms of a measure of the average misspecification level.
arXiv Detail & Related papers (2021-02-26T00:15:04Z) - Upper Confidence Bounds for Combining Stochastic Bandits [52.10197476419621]
We provide a simple method to combine bandit algorithms.
Our approach is based on a "meta-UCB" procedure that treats each of $N$ individual bandit algorithms as arms in a higher-level $N$-armed bandit problem. A schematic sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-12-24T05:36:29Z) - Experimental Design for Regret Minimization in Linear Bandits [19.8309784360219]
We propose a novel experimental-design-based algorithm to minimize regret in online linear and combinatorial bandits.
We provide state-of-the-art finite time regret guarantees and show that our algorithm can be applied in both the bandit and semi-bandit feedback regime.
arXiv Detail & Related papers (2020-11-01T17:59:19Z) - Adaptive Discretization for Adversarial Lipschitz Bandits [85.39106976861702]
Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces.
A central theme here is the adaptive discretization of the action space, which gradually "zooms in" on the more promising regions.
We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds.
arXiv Detail & Related papers (2020-06-22T16:06:25Z) - Beyond UCB: Optimal and Efficient Contextual Bandits with Regression
Oracles [112.89548995091182]
We provide the first universal and optimal reduction from contextual bandits to online regression.
Our algorithm requires no distributional assumptions beyond realizability, and works even when contexts are chosen adversarially. A minimal sketch of the inverse-gap-weighting rule behind such regression-oracle reductions appears after this list.
arXiv Detail & Related papers (2020-02-12T11:33:46Z) - Bandit algorithms to emulate human decision making using probabilistic
distortions [20.422725678982726]
We formulate two sample multi-armed bandit problems with distorted probabilities on the reward distributions.
We consider these problems in both the regret minimization and the best arm identification frameworks for multi-armed bandits.
arXiv Detail & Related papers (2016-11-30T17:37:51Z)
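As referenced in the combining-bandits entry above, here is a schematic sketch of the meta-UCB idea: run a UCB index over $N$ base bandit algorithms, treating each base as an arm. This is an illustration under assumptions, not the paper's exact procedure; the act()/update() interface of the base algorithms is hypothetical, and a plain UCB1 index is used in place of the paper's meta-UCB index.

    import math

    class MetaUCB:
        """Illustrative sketch: a plain UCB1 index over N base bandit algorithms,
        each treated as an arm of a higher-level bandit. The base algorithms are
        assumed to expose hypothetical act() and update(arm, reward) methods."""

        def __init__(self, base_algorithms):
            self.bases = base_algorithms
            self.counts = [0] * len(base_algorithms)   # times each base was chosen
            self.sums = [0.0] * len(base_algorithms)   # cumulative reward per base
            self.t = 0

        def select_base(self):
            self.t += 1
            for i, n in enumerate(self.counts):        # try every base once first
                if n == 0:
                    return i
            def index(i):
                mean = self.sums[i] / self.counts[i]
                bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[i])
                return mean + bonus
            return max(range(len(self.bases)), key=index)

        def step(self, pull_arm):
            # pull_arm: callable mapping an arm to an observed reward.
            i = self.select_base()
            arm = self.bases[i].act()                  # the chosen base picks the actual arm
            reward = pull_arm(arm)
            self.bases[i].update(arm, reward)          # only the chosen base sees this feedback
            self.counts[i] += 1
            self.sums[i] += reward
            return arm, reward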
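As referenced in the "Beyond UCB" entry above, here is a minimal sketch of the inverse-gap-weighting rule that such regression-oracle reductions use to turn reward predictions into an exploration distribution. Parameter names and example values are illustrative; the full algorithm also prescribes how the exploration parameter gamma is set and how the online regression oracle is updated.

    import numpy as np

    def inverse_gap_weighting(predictions, gamma):
        """Turn oracle reward predictions for K actions into a sampling
        distribution: worse-looking actions get probability inversely
        proportional to their estimated gap from the greedy action."""
        predictions = np.asarray(predictions, dtype=float)
        K = predictions.shape[0]
        best = int(np.argmax(predictions))
        gaps = predictions[best] - predictions     # non-negative estimated gaps
        probs = 1.0 / (K + gamma * gaps)
        probs[best] = 0.0
        probs[best] = 1.0 - probs.sum()            # leftover mass on the greedy action
        return probs

    # Example round: sample an action from the distribution, then feed the
    # observed (context, action, reward) triple back to the regression oracle
    # as a squared-loss example.
    rng = np.random.default_rng(0)
    p = inverse_gap_weighting([0.2, 0.5, 0.4], gamma=10.0)
    action = rng.choice(len(p), p=p)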
This list is automatically generated from the titles and abstracts of the papers in this site.