Adapting to Misspecification in Contextual Bandits
- URL: http://arxiv.org/abs/2107.05745v1
- Date: Mon, 12 Jul 2021 21:30:41 GMT
- Title: Adapting to Misspecification in Contextual Bandits
- Authors: Dylan J. Foster and Claudio Gentile and Mehryar Mohri and Julian
Zimmert
- Abstract summary: We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
- Score: 82.55565343668246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major research direction in contextual bandits is to develop algorithms
that are computationally efficient, yet support flexible, general-purpose
function approximation. Algorithms based on modeling rewards have shown strong
empirical performance, but typically require a well-specified model, and can
fail when this assumption does not hold. Can we design algorithms that are
efficient and flexible, yet degrade gracefully in the face of model
misspecification? We introduce a new family of oracle-efficient algorithms for
$\varepsilon$-misspecified contextual bandits that adapt to unknown model
misspecification -- both for finite and infinite action settings. Given access
to an online oracle for square loss regression, our algorithm attains optimal
regret and -- in particular -- optimal dependence on the misspecification
level, with no prior knowledge. Specializing to linear contextual bandits with
infinite actions in $d$ dimensions, we obtain the first algorithm that achieves
the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown
misspecification level $\varepsilon$.
On a conceptual level, our results are enabled by a new optimization-based
perspective on the regression oracle reduction framework of Foster and Rakhlin,
which we anticipate will find broader use.
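The regression-oracle reduction of Foster and Rakhlin referenced above is built on inverse-gap weighting: given a regression oracle's predicted rewards, each suboptimal action is sampled with probability inversely proportional to its predicted gap from the best action. A minimal Python sketch of that weighting rule (the function name, the `gamma` parameter, and the example values are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def inverse_gap_weighting(y_hat, gamma):
    """Map oracle-predicted rewards to an action distribution.

    A SquareCB-style rule: actions whose predicted reward falls further
    below the best prediction are played with smaller probability.
    y_hat : predicted reward per action (from the regression oracle).
    gamma : exploration parameter; larger gamma exploits more.
    """
    y_hat = np.asarray(y_hat, dtype=float)
    K = len(y_hat)
    best = int(np.argmax(y_hat))
    probs = np.zeros(K)
    for a in range(K):
        if a != best:
            probs[a] = 1.0 / (K + gamma * (y_hat[best] - y_hat[a]))
    probs[best] = 1.0 - probs.sum()  # remaining mass on the greedy action
    return probs

# Example: three actions, oracle favors action 1.
rng = np.random.default_rng(0)
p = inverse_gap_weighting([0.2, 0.5, 0.1], gamma=10.0)
action = int(rng.choice(len(p), p=p))
```

The key design point is that the oracle is only queried for predictions; no explicit confidence sets are maintained, which is what makes the reduction oracle-efficient.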
Related papers
- Second Order Methods for Bandit Optimization and Control [34.51425758864638]
We show that our algorithm achieves optimal regret bounds for a large class of convex functions that we call $\kappa$-convex.
We also investigate the adaptation of our second-order bandit algorithm to online convex optimization with memory.
arXiv Detail & Related papers (2024-02-14T04:03:38Z) - Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z) - Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Adapting to misspecification in contextual bandits with offline regression oracles [7.312170216336086]
We propose a family of contextual bandit algorithms that adapt to misspecification error by reverting to a good safe policy.
Our algorithm requires only an offline regression oracle to ensure regret guarantees that gracefully degrade in terms of a measure of the average misspecification level.
arXiv Detail & Related papers (2021-02-26T00:15:04Z) - Conservative Stochastic Optimization with Expectation Constraints [11.393603788068777]
This paper considers convex optimization problems where the objective and constraint functions involve expectations with respect to the data indices or environmental variables.
Online and efficient approaches for solving such problems have not been widely studied.
We propose a novel conservative optimization algorithm (CSOA) that achieves zero constraint violation and $O\left(T^{-\frac{1}{2}}\right)$ optimality gap.
arXiv Detail & Related papers (2020-08-13T08:56:24Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z) - Beyond UCB: Optimal and Efficient Contextual Bandits with Regression
Oracles [112.89548995091182]
We provide the first universal and optimal reduction from contextual bandits to online regression.
Our algorithm requires no distributional assumptions beyond realizability, and works even when contexts are chosen adversarially.
arXiv Detail & Related papers (2020-02-12T11:33:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.