Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear
Bandit Algorithms
- URL: http://arxiv.org/abs/2211.05632v2
- Date: Sat, 27 May 2023 00:24:29 GMT
- Authors: Osama A. Hanna, Lin F. Yang, Christina Fragouli
- Abstract summary: We address the contextual linear bandit problem, where a decision maker is provided a context.
We show that the contextual problem can be solved as a linear bandit problem.
Our results imply a $O(d\sqrt{T\log T})$ high-probability regret bound for contextual linear bandits.
- Score: 39.70492757288025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address the stochastic contextual linear bandit problem,
where a decision maker is provided a context (a random set of actions drawn
from a distribution). The expected reward of each action is specified by the
inner product of the action and an unknown parameter. The goal is to design an
algorithm that learns to play as close as possible to the unknown optimal
policy after a number of action plays. This problem is considered more
challenging than the linear bandit problem, which can be viewed as a contextual
bandit problem with a \emph{fixed} context. Surprisingly, in this paper, we
show that the stochastic contextual problem can be solved as if it is a linear
bandit problem. In particular, we establish a novel reduction framework that
converts every stochastic contextual linear bandit instance to a linear bandit
instance, when the context distribution is known. When the context distribution
is unknown, we establish an algorithm that reduces the stochastic contextual
instance to a sequence of linear bandit instances with small misspecifications
and achieves nearly the same worst-case regret bound as the algorithm that
solves the misspecified linear bandit instances.
As a consequence, our results imply a $O(d\sqrt{T\log T})$ high-probability
regret bound for contextual linear bandits, making progress toward resolving an
open problem posed in (Li et al., 2019) and (Li et al., 2021).
Our reduction framework opens up a new way to approach stochastic contextual
linear bandit problems, and enables improved regret bounds in a number of
instances including the batch setting, contextual bandits with
misspecifications, contextual bandits with sparse unknown parameters, and
contextual bandits with adversarial corruption.
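To make the reduction concrete, below is a minimal Python sketch of the known-distribution case described in the abstract. It is illustrative only, not the paper's exact construction: the finite candidate grid `theta_grid`, the Monte Carlo estimate of the expected greedy feature $\phi(\theta) = \mathbb{E}_{A}[\arg\max_{a \in A}\langle a, \theta\rangle]$, and the LinUCB-style index are simplifying assumptions introduced here. The point it illustrates is that once $\phi(\theta)$ is available, the contextual problem presents itself to the learner as a fixed-arm linear bandit with expected reward $\langle \phi(\theta), \theta^*\rangle$.

```python
# Illustrative sketch of reducing a stochastic contextual linear bandit (known
# context distribution) to a fixed-arm linear bandit. NOT the authors' exact
# algorithm; theta_grid, n_mc, and the LinUCB-style index are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T, K = 3, 2000, 10              # dimension, horizon, actions per context

def sample_context():
    """Draw a context: K random actions on the unit sphere (known distribution)."""
    A = rng.normal(size=(K, d))
    return A / np.linalg.norm(A, axis=1, keepdims=True)

def greedy(A, theta):
    """Greedy action in context A under parameter guess theta."""
    return A[np.argmax(A @ theta)]

# Step 1: build the induced, fixed arm set of the reduced linear bandit.
# Each candidate theta is mapped to the expected feature of its greedy policy,
# phi(theta) = E_context[ argmax_{a in A} <a, theta> ], estimated by Monte Carlo.
theta_grid = rng.normal(size=(50, d))
theta_grid /= np.linalg.norm(theta_grid, axis=1, keepdims=True)
n_mc = 2000
phi = np.array([
    np.mean([greedy(sample_context(), th) for _ in range(n_mc)], axis=0)
    for th in theta_grid
])                                  # shape (50, d): fixed arms of the reduced problem

# Step 2: run an off-the-shelf linear bandit rule (LinUCB-style) on those arms.
theta_star = np.array([1.0, 0.0, 0.0])      # unknown reward parameter (for simulation)
lam, beta = 1.0, 2.0
V, b = lam * np.eye(d), np.zeros(d)

for t in range(T):
    theta_hat = np.linalg.solve(V, b)
    Vinv = np.linalg.inv(V)
    # Optimistic index over the induced arms phi(theta).
    ucb = phi @ theta_hat + beta * np.sqrt(np.einsum('id,dk,ik->i', phi, Vinv, phi))
    i = int(np.argmax(ucb))
    # In the original contextual problem: observe a fresh context and play the
    # greedy action for the selected candidate theta_grid[i].
    a = greedy(sample_context(), theta_grid[i])
    r = a @ theta_star + 0.1 * rng.normal()  # noisy linear reward
    # The linear bandit only sees the induced arm phi(theta_i) and the reward;
    # averaged over contexts, E[r | i] = <phi(theta_i), theta_star>.
    x = phi[i]
    V += np.outer(x, x)
    b += r * x

print("estimated theta:", np.linalg.solve(V, b))
```

With a finite Monte Carlo estimate of $\phi$, the reduced instance is only approximately linear, which loosely mirrors the small-misspecification reductions the abstract describes for the unknown-distribution case.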
Related papers
- On the Optimal Regret of Locally Private Linear Contextual Bandit [18.300225068036642]
We show that it is possible to achieve an $\tilde{O}(\sqrt{T})$ regret upper bound for locally private linear contextual bandits.
Our solution relies on several new algorithmic and analytical ideas.
arXiv Detail & Related papers (2024-04-15T02:00:24Z)
- Feel-Good Thompson Sampling for Contextual Dueling Bandits [49.450050682705026]
We propose a Thompson sampling algorithm, named FGTS.CDB, for linear contextual dueling bandits.
At the core of our algorithm is a new Feel-Good exploration term specifically tailored for dueling bandits.
Our algorithm achieves nearly minimax-optimal regret, i.e., $\tilde{\mathcal{O}}(d\sqrt{T})$, where $d$ is the model dimension and $T$ is the time horizon.
arXiv Detail & Related papers (2024-04-09T04:45:18Z)
- Optimal cross-learning for contextual bandits with unknown context distributions [28.087360479901978]
We consider the problem of designing contextual bandit algorithms in the ``cross-learning'' setting of Balseiro et al.
We provide an efficient algorithm with a nearly tight (up to logarithmic factors) regret bound of $\widetilde{O}(\sqrt{TK})$, independent of the number of contexts.
At the core of our algorithm is a novel technique for coordinating the execution of an algorithm over multiple epochs.
arXiv Detail & Related papers (2024-01-03T18:02:13Z)
- Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm to achieve a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandits.
We generalize our algorithm to the episodic MDP setting and achieve, for the first time, an additive dependence on the corruption level $\zeta$.
arXiv Detail & Related papers (2022-12-12T15:04:56Z)
- Complete Policy Regret Bounds for Tallying Bandits [51.039677652803675]
Policy regret is a well-established notion for measuring the performance of an online learning algorithm against an adaptive adversary.
We study restrictions on the adversary that enable efficient minimization of the \emph{complete policy regret}.
We provide an algorithm that w.h.p. achieves a complete policy regret guarantee of $\tilde{\mathcal{O}}(mK\sqrt{T})$, where the $\tilde{\mathcal{O}}$ notation hides only logarithmic factors.
arXiv Detail & Related papers (2022-04-24T03:10:27Z)
- Stochastic Linear Bandits Robust to Adversarial Attacks [117.665995707568]
We provide two variants of a Robust Phased Elimination algorithm, one that knows $C$ and one that does not.
We show that both variants attain near-optimal regret in the non-corrupted case $C = 0$, while incurring additional additive terms, respectively.
In a contextual setting, we show that a simple greedy algorithm is provably robust with a near-optimal additive regret term, despite performing no explicit exploration and not knowing $C$.
arXiv Detail & Related papers (2020-07-07T09:00:57Z)
- Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
- Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability [18.45278329799526]
We design a fast and simple algorithm that achieves the statistically optimal regret with only $O(log T)$ calls to an offline regression oracle.
Our results provide the first universal and optimal reduction from contextual bandits to offline regression.
arXiv Detail & Related papers (2020-03-28T04:16:52Z)
- Contextual Blocking Bandits [35.235375147227124]
We study a novel variant of the multi-armed bandit problem, where at each time step, the player observes an independently sampled context that determines the arms' mean rewards.
Playing an arm blocks it (across all contexts) for a fixed and known number of future time steps.
We propose a UCB-based variant of the full-information algorithm that guarantees an $\mathcal{O}(\log T)$-regret w.r.t. an $\alpha$-regret strategy in $T$ time steps, matching the $\Omega(\log T)$ lower bound.
arXiv Detail & Related papers (2020-03-06T20:34:42Z)