An Improved Relaxation for Oracle-Efficient Adversarial Contextual
Bandits
- URL: http://arxiv.org/abs/2310.19025v2
- Date: Fri, 10 Nov 2023 16:14:10 GMT
- Title: An Improved Relaxation for Oracle-Efficient Adversarial Contextual
Bandits
- Authors: Kiarash Banihashem, MohammadTaghi Hajiaghayi, Suho Shin, Max Springer
- Abstract summary: We present an oracle-efficient relaxation for the adversarial contextual bandits problem.
Our algorithm has a regret bound of $O(T^{\frac{2}{3}}(K\log(|\Pi|))^{\frac{1}{3}})$ and makes at most $O(K)$ calls per round to an offline optimization oracle.
- Score: 14.058574632553547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an oracle-efficient relaxation for the adversarial contextual
bandits problem, where the contexts are sequentially drawn i.i.d. from a known
distribution and the cost sequence is chosen by an online adversary. Our
algorithm has a regret bound of
$O(T^{\frac{2}{3}}(K\log(|\Pi|))^{\frac{1}{3}})$ and makes at most $O(K)$ calls
per round to an offline optimization oracle, where $K$ denotes the number of
actions, $T$ denotes the number of rounds and $\Pi$ denotes the set of
policies. This is the first result to improve the prior best bound of
$O((TK)^{\frac{2}{3}}(\log(|\Pi|))^{\frac{1}{3}})$ as obtained by Syrgkanis et
al. at NeurIPS 2016, and the first to match the original bound of Langford and
Zhang at NeurIPS 2007 which was obtained for the stochastic case.
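To make the interaction model concrete, the following is a minimal Python sketch of an oracle-efficient contextual bandit loop under the abstract's assumptions (i.i.d. contexts, adversarially chosen costs, an offline optimization oracle). The `offline_oracle` below is a hypothetical stand-in that optimizes over a toy policy class of constant policies, and the uniform-exploration wrapper illustrates the generic oracle-based protocol, not the paper's specific relaxation.

```python
import numpy as np

def offline_oracle(cost_vectors, K):
    """Hypothetical offline optimization oracle. Over a toy policy class
    of constant policies, return the action minimizing cumulative
    estimated cost -- a stand-in for an ERM oracle over a general Pi."""
    if not cost_vectors:
        return 0
    return int(np.argmin(np.sum(cost_vectors, axis=0)))

def contextual_bandit(T, K, sample_context, cost_fn, seed=0):
    """Protocol sketch: one oracle call per round, uniform exploration,
    importance-weighted cost estimates fed back to the oracle."""
    rng = np.random.default_rng(seed)
    eps = min(1.0, (K * np.log(T + 1) / T) ** (1 / 3))  # exploration rate
    estimates, total_cost = [], 0.0
    for t in range(T):
        x = sample_context(rng)                # context drawn i.i.d.
        greedy = offline_oracle(estimates, K)  # offline oracle call
        explore = rng.random() < eps
        a = int(rng.integers(K)) if explore else greedy
        c = cost_fn(t, x, a)                   # adversarially chosen cost
        p_a = eps / K + (1.0 - eps) * (a == greedy)  # prob. of playing a
        c_hat = np.zeros(K)
        c_hat[a] = c / p_a                     # unbiased cost estimate
        estimates.append(c_hat)
        total_cost += c
    return total_cost

# Example: 5 actions, scalar contexts, an oblivious rotating cost sequence.
total = contextual_bandit(
    T=1000, K=5,
    sample_context=lambda rng: rng.normal(),
    cost_fn=lambda t, x, a: float(a == t % 5),
)
```

The exploration rate is tuned to the $T^{2/3}$-type tradeoff discussed above; swapping in a richer policy class and oracle leaves the protocol itself unchanged.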
Related papers
- Continuum-armed Bandit Optimization with Batch Pairwise Comparison Oracles [14.070618685107645]
We study a bandit optimization problem where the goal is to maximize a function $f(x)$ over $T$ periods using only batch pairwise comparison oracles. We show that such pairwise comparisons find important applications in joint pricing and inventory replenishment problems.
arXiv Detail & Related papers (2025-05-28T13:41:00Z) - Federated Combinatorial Multi-Agent Multi-Armed Bandits [79.1700188160944]
This paper introduces a federated learning framework tailored for online combinatorial optimization with bandit feedback.
In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals.
arXiv Detail & Related papers (2024-05-09T17:40:09Z) - Best-of-Both-Worlds Algorithms for Linear Contextual Bandits [11.94312915280916]
We study best-of-both-worlds algorithms for $K$-armed linear contextual bandits.
Our algorithms deliver near-optimal regret bounds in both the stochastic and adversarial regimes.
arXiv Detail & Related papers (2023-12-24T08:27:30Z) - Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel Stochastic-Greedy Bandit (SGB) algorithm for combinatorial multi-armed bandit problems where no information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
arXiv Detail & Related papers (2023-12-13T11:08:25Z) - Context-lumpable stochastic bandits [49.024050919419366]
We consider a contextual bandit problem with $S$ contexts and $K$ actions.
We give an algorithm that outputs an $\epsilon$-optimal policy after using at most $\widetilde{O}(r(S+K)/\epsilon^2)$ samples.
In the regret setting, we give an algorithm whose cumulative regret up to time $T$ is bounded by $\widetilde{O}(\sqrt{r^3(S+K)T})$.
arXiv Detail & Related papers (2023-06-22T17:20:30Z) - First- and Second-Order Bounds for Adversarial Linear Contextual Bandits [22.367921675238318]
We consider the adversarial linear contextual bandit setting, which allows the loss functions associated with each of the $K$ arms to change over time without restriction.
We prove first- and second-order regret bounds that scale with the second-order quantity $V_T$ or with $L_T^*$, the cumulative loss of the best policy. Since $V_T$ or $L_T^*$ may be significantly smaller than $T$, these improve over the worst-case regret whenever the environment is relatively benign.
arXiv Detail & Related papers (2023-05-01T14:00:15Z) - Optimal Clustering with Noisy Queries via Multi-Armed Bandit [19.052525950282234]
Motivated by many applications, we study clustering with a faulty oracle.
We propose a new polynomial-time algorithm with $O(\frac{n}{\delta^2} + \text{poly}(k, \frac{1}{\delta}, \log n))$ queries.
Our main ingredient is an interesting connection between our problem and the multi-armed bandit problem.
arXiv Detail & Related papers (2022-07-12T08:17:29Z) - Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite-horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
arXiv Detail & Related papers (2022-04-11T15:08:09Z) - Corralling a Larger Band of Bandits: A Case Study on Switching Regret
for Linear Bandits [99.86860277006318]
We consider the problem of combining and learning over a set of adversarial algorithms with the goal of adaptively tracking the best one on the fly.
The CORRAL algorithm of Agarwal et al. achieves this goal with a regret overhead of order $\widetilde{O}(\sqrt{MST})$, where $M$ is the number of base algorithms, $S$ is the number of switches, and $T$ is the time horizon.
Motivated by this issue, we propose a new recipe to corral a larger band of bandit algorithms whose regret overhead has only logarithmic dependence on $M$ as long
arXiv Detail & Related papers (2022-02-12T21:55:44Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Second-Order Information in Non-Convex Stochastic Optimization: Power
and Limitations [54.42518331209581]
We give an algorithm that finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$), and extend the guarantees to finding $(\epsilon,\gamma)$-approximate second-order stationary points.
Our lower bounds here are novel even in the noiseless case.
arXiv Detail & Related papers (2020-06-24T04:41:43Z) - Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z) - Efficient and Robust Algorithms for Adversarial Linear Contextual
Bandits [13.32560004325655]
We consider an adversarial variant of the classic $K$-armed linear contextual bandit problem.
Under the assumption that the $d$-dimensional contexts are generated i.i.d. at random, we develop efficient algorithms based on the classic Exp3 algorithm (a minimal Exp3 sketch follows this list).
arXiv Detail & Related papers (2020-02-01T22:49:46Z)
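As background for the last entry above, here is a minimal, self-contained Python sketch of the classic Exp3 algorithm for the plain $K$-armed adversarial bandit. It is illustrative only and is not the linear-contextual extension developed in that paper.

```python
import numpy as np

def exp3(T, K, cost_fn, seed=0):
    """Classic Exp3: exponential weights over K arms with
    importance-weighted cost estimates; costs assumed in [0, 1]."""
    rng = np.random.default_rng(seed)
    eta = np.sqrt(np.log(K) / (T * K))   # standard learning rate
    log_w = np.zeros(K)                  # log-weights for numerical stability
    total_cost = 0.0
    for t in range(T):
        z = log_w - log_w.max()
        p = np.exp(z) / np.exp(z).sum()  # sampling distribution over arms
        a = int(rng.choice(K, p=p))      # draw an arm
        c = cost_fn(t, a)                # bandit feedback: only c(t, a)
        log_w[a] -= eta * c / p[a]       # importance-weighted update
        total_cost += c
    return total_cost

# Example: arm 0 is best on average against this oblivious cost sequence.
print(exp3(T=5000, K=4, cost_fn=lambda t, a: 0.2 if a == 0 else 0.8))
```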