Leveraging Good Representations in Linear Contextual Bandits
- URL: http://arxiv.org/abs/2104.03781v1
- Date: Thu, 8 Apr 2021 14:05:31 GMT
- Title: Leveraging Good Representations in Linear Contextual Bandits
- Authors: Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric and Matteo Pirotta
- Abstract summary: A contextual bandit problem may admit multiple linear representations.
Recent works showed that there exist "good" representations for which constant problem-dependent regret can be achieved.
We show that the regret is indeed never worse than the regret obtained by running LinUCB on the best representation.
- Score: 131.91060536108301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The linear contextual bandit literature is mostly focused on the design of
efficient learning algorithms for a given representation. However, a contextual
bandit problem may admit multiple linear representations, each one with
different characteristics that directly impact the regret of the learning
algorithm. In particular, recent works showed that there exist "good"
representations for which constant problem-dependent regret can be achieved. In
this paper, we first provide a systematic analysis of the different definitions
of "good" representations proposed in the literature. We then propose a novel
selection algorithm able to adapt to the best representation in a set of $M$
candidates. We show that the regret is indeed never worse than the regret
obtained by running LinUCB on the best representation (up to a $\ln M$ factor).
As a result, our algorithm achieves constant regret whenever a "good"
representation is available in the set. Furthermore, we show that the algorithm
may still achieve constant regret by implicitly constructing a "good"
representation, even when none of the initial representations is "good".
Finally, we empirically validate our theoretical findings in a number of
standard contextual bandit problems.
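To make the selection idea concrete, below is a minimal Python sketch of the general recipe the abstract describes: one LinUCB learner per candidate representation, whose per-arm optimistic indices are combined (here via a minimum, one natural choice when every candidate is realizable, since each index is then a valid upper bound on the true reward) before acting greedily. In the prior work this paper builds on, a representation is "good" (HLS) when the features of optimal arms span the whole feature space, i.e. $\lambda_{\min}(\mathbb{E}_x[\phi(x,a^*(x))\phi(x,a^*(x))^\top]) > 0$, which is what enables constant regret. The names below (`LinUCBRep`, `select_action`, `run_round`, `feature_maps`, `get_reward`) are illustrative assumptions, and the combination rule is a sketch rather than the paper's exact algorithm.

```python
import numpy as np

class LinUCBRep:
    """One LinUCB learner attached to a single candidate feature map."""

    def __init__(self, dim, reg=1.0, beta=1.0):
        self.A = reg * np.eye(dim)   # regularized design matrix
        self.b = np.zeros(dim)       # accumulated reward-weighted features
        self.beta = beta             # confidence width (set by theory or tuning)

    def ucb(self, feat):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                        # ridge estimate of the parameter
        bonus = self.beta * np.sqrt(feat @ A_inv @ feat)
        return float(feat @ theta + bonus)            # optimistic reward estimate

    def update(self, feat, reward):
        self.A += np.outer(feat, feat)
        self.b += reward * feat


def select_action(learners, feature_maps, context, n_actions):
    """Combine the M optimistic indices per arm (here with a minimum) and act greedily."""
    scores = [
        min(lrn.ucb(phi(context, a)) for lrn, phi in zip(learners, feature_maps))
        for a in range(n_actions)
    ]
    return int(np.argmax(scores))


# Usage sketch: after acting, every learner is updated with its own features.
# `feature_maps` is a list of M callables phi_i(context, action) -> np.ndarray,
# and `get_reward` stands in for a hypothetical environment call.
def run_round(learners, feature_maps, context, n_actions, get_reward):
    a = select_action(learners, feature_maps, context, n_actions)
    r = get_reward(context, a)
    for lrn, phi in zip(learners, feature_maps):
        lrn.update(phi(context, a), r)
    return a, r
```

Taking the minimum keeps the combined index a valid upper confidence bound whenever all $M$ candidates are realizable, so for each arm the exploration bonus is effectively driven by whichever representation is currently most informative.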
Related papers
- On the Complexity of Representation Learning in Contextual Linear Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
arXiv Detail & Related papers (2022-12-19T13:08:58Z)
- Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees [103.69464492445779]
We propose BanditSRL, a representation learning algorithm that learns a realizable representation with good spectral properties.
We prove that BanditSRL can be paired with any no-regret algorithm and achieve constant regret whenever an HLS representation is available.
arXiv Detail & Related papers (2022-10-24T10:04:54Z)
- Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces [14.366265951396587]
We design efficient general-purpose contextual bandit algorithms for large -- or even continuous -- action spaces.
We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives.
Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret.
arXiv Detail & Related papers (2022-07-12T21:27:09Z)
- Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision-making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z)
- Impact of Representation Learning in Linear Bandits [83.17684841392754]
We study how representation learning can improve the efficiency of learning in bandit problems.
We present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit.
arXiv Detail & Related papers (2020-10-13T16:35:30Z)
- Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)