Model Selection for Generic Contextual Bandits
- URL: http://arxiv.org/abs/2107.03455v2
- Date: Thu, 20 Jul 2023 11:54:22 GMT
- Title: Model Selection for Generic Contextual Bandits
- Authors: Avishek Ghosh, Abishek Sankararaman and Kannan Ramchandran
- Abstract summary: We propose a successive refinement based algorithm called Adaptive Contextual Bandit (ACB).
We prove that this algorithm is adaptive, i.e., its regret rate order-wise matches that of any provable contextual bandit algorithm that knows the true model class.
We also show that a much simpler explore-then-commit (ETC) style algorithm obtains a similar regret bound, despite not knowing the true model class.
- Score: 20.207989166682832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of model selection for general stochastic
contextual bandits under the realizability assumption. We propose a successive
refinement based algorithm called Adaptive Contextual Bandit (ACB), which works
in phases and successively eliminates model classes that are too simple to fit
the given instance. We prove that this algorithm is adaptive, i.e., its regret
rate order-wise matches that of any provable contextual bandit algorithm (e.g.,
\cite{falcon}) that requires knowledge of the true model class. The price of not
knowing the correct model class turns out to be only an additive term
contributing to the second-order term in the regret bound. This cost possesses
the intuitive property that it becomes smaller as the model class becomes easier
to identify, and vice versa. We also show that a much simpler explore-then-commit
(ETC) style algorithm obtains a similar regret bound, despite not knowing the
true model class. However, as expected, the cost of model selection is higher
for ETC than for ACB. Furthermore, for the special case of linear contextual
bandits, we propose specialized algorithms that obtain sharper guarantees than
in the generic setup.
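
As a concrete, deliberately naive illustration of the explore-then-commit flavor of model selection described above, the sketch below explores uniformly, fits nested linear model classes of increasing dimension to the exploration data, and commits to the smallest class whose fit is competitive with the richest one. The class dimensions, slack threshold, and synthetic data are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def etc_model_selection(contexts, actions, rewards, dims, slack=0.05):
    """Pick the smallest linear model class (first d context coordinates)
    whose exploration-phase fit is within an additive slack of the richest
    class. Illustrative sketch only."""
    errors = []
    for d in dims:
        X = contexts[:, :d]
        err = 0.0
        for a in np.unique(actions):
            mask = actions == a
            theta, *_ = np.linalg.lstsq(X[mask], rewards[mask], rcond=None)
            err += np.sum((X[mask] @ theta - rewards[mask]) ** 2)
        errors.append(err / len(rewards))
    best = min(errors)
    for d, err in zip(dims, errors):
        if err <= best + slack:          # smallest class that is not too simple
            return d
    return dims[-1]

# Usage on synthetic data: true rewards depend only on the first 3 coordinates.
T_explore, K, D = 2000, 4, 10
contexts = rng.normal(size=(T_explore, D))
actions = rng.integers(0, K, size=T_explore)
theta_true = np.zeros(D); theta_true[:3] = [1.0, -0.5, 0.8]
rewards = contexts @ theta_true + 0.1 * rng.normal(size=T_explore)
print(etc_model_selection(contexts, actions, rewards, dims=[1, 3, 5, 10]))
```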
Related papers
- Bad Values but Good Behavior: Learning Highly Misspecified Bandits and
MDPs [16.777565006843012]
Parametric, feature-based reward models are employed by a variety of algorithms in decision-making settings such as bandits and Markov decision processes (MDPs).
We show that basic algorithms such as $\epsilon$-greedy, LinUCB and fitted Q-learning provably learn optimal policies even under highly misspecified models.
This contrasts with existing worst-case results for, say, misspecified bandits, which show regret bounds that scale linearly with time, and it shows that there can be a nontrivially large set of bandit instances that are robust to misspecification.
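
For concreteness, here is a minimal $\epsilon$-greedy contextual bandit with a per-arm linear reward model; the nonlinear ground-truth reward makes the linear model deliberately misspecified. All constants and the reward function are illustrative assumptions, not taken from that paper.

```python
import numpy as np

# Epsilon-greedy with per-arm ridge-style linear estimates; the true reward
# is nonlinear, so the linear model is (mildly) misspecified by construction.
rng = np.random.default_rng(1)
K, D, T, eps = 3, 5, 5000, 0.1

A = [np.eye(D) * 1e-3 for _ in range(K)]   # regularized Gram matrices
b = [np.zeros(D) for _ in range(K)]

def true_reward(x, a):
    # Nonlinear ground truth: a linear model cannot fit this exactly.
    return np.tanh(x[a % D]) + 0.3 * x[0] * x[1]

for t in range(T):
    x = rng.normal(size=D)
    if rng.random() < eps:
        a = int(rng.integers(K))            # explore uniformly
    else:
        est = [x @ np.linalg.solve(A[a_], b[a_]) for a_ in range(K)]
        a = int(np.argmax(est))             # exploit the fitted linear model
    r = true_reward(x, a) + 0.1 * rng.normal()
    A[a] += np.outer(x, x)
    b[a] += r * x
```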
arXiv Detail & Related papers (2023-10-13T18:53:30Z) - Anytime Model Selection in Linear Bandits [61.97047189786905]
We develop ALEXP, which has an exponentially improved dependence on $M$ (the number of candidate models) for its regret.
Our approach utilizes a novel time-uniform analysis of the Lasso, establishing a new connection between online learning and high-dimensional statistics.
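
The Lasso connection can be pictured with a small offline sketch (not ALEXP itself, which is an anytime online procedure): concatenate the feature maps of the $M$ candidate models and let the $\ell_1$ penalty reveal which group of coordinates carries signal. The feature dimensions, regularization strength, and synthetic data below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative group-recovery sketch: stack the features of M candidate models
# and let the Lasso's sparsity indicate which model carries the signal.
rng = np.random.default_rng(2)
n, M, d = 500, 5, 4                      # samples, candidate models, features per model
Phi = rng.normal(size=(n, M * d))        # concatenated feature maps
theta = np.zeros(M * d)
theta[2 * d:3 * d] = [1.0, -1.0, 0.5, 0.0]   # model index 2 is the "true" one
y = Phi @ theta + 0.1 * rng.normal(size=n)

coef = Lasso(alpha=0.05).fit(Phi, y).coef_
group_norms = [np.linalg.norm(coef[m * d:(m + 1) * d]) for m in range(M)]
print("selected model:", int(np.argmax(group_norms)))
```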
arXiv Detail & Related papers (2023-07-24T15:44:30Z) - Oracle Inequalities for Model Selection in Offline Reinforcement
Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z) - Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
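
Below is a small sequential scikit-learn baseline for OMP-style greedy feature selection; it ignores that paper's parallel adaptive-query machinery and fairness constraints, and the data is synthetic.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Sequential OMP baseline: greedily select k features by residual correlation.
rng = np.random.default_rng(3)
n, p, k = 400, 50, 5
X = rng.normal(size=(n, p))
support = rng.choice(p, size=k, replace=False)
y = X[:, support] @ rng.normal(size=k) + 0.05 * rng.normal(size=n)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(X, y)
selected = np.flatnonzero(omp.coef_)
print("true support:", sorted(support.tolist()), "selected:", sorted(selected.tolist()))
```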
arXiv Detail & Related papers (2022-02-28T12:26:47Z) - Universal and data-adaptive algorithms for model selection in linear
contextual bandits [52.47796554359261]
We consider the simplest non-trivial instance of model-selection: distinguishing a simple multi-armed bandit problem from a linear contextual bandit problem.
We introduce new algorithms that explore in a data-adaptive manner and provide guarantees of the form $\mathcal{O}(d^{\alpha} T^{1-\alpha})$.
Our approach extends to model selection among nested linear contextual bandits under some additional assumptions.
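
One naive way to picture that simplest instance (multi-armed vs. linear contextual), which is not the paper's estimator: on exploration data, check how much a per-arm linear fit improves over per-arm constant means; a large gap suggests the contexts matter. The threshold and synthetic data below are illustrative assumptions.

```python
import numpy as np

def contexts_matter(contexts, actions, rewards, tol=0.02):
    """Naive diagnostic (illustrative only): does a per-arm linear model reduce
    squared error noticeably compared with per-arm constant means?"""
    gaps = []
    for a in np.unique(actions):
        mask = actions == a
        X, r = contexts[mask], rewards[mask]
        const_err = np.mean((r - r.mean()) ** 2)
        theta, *_ = np.linalg.lstsq(X, r - r.mean(), rcond=None)
        lin_err = np.mean((r - r.mean() - X @ theta) ** 2)
        gaps.append(const_err - lin_err)
    return float(np.mean(gaps)) > tol

# Synthetic check: rewards genuinely depend on the context.
rng = np.random.default_rng(5)
ctx = rng.normal(size=(2000, 4))
acts = rng.integers(0, 3, size=2000)
rew = ctx[:, 0] * (acts + 1) + 0.1 * rng.normal(size=2000)
print(contexts_matter(ctx, acts, rew))   # expected: True
```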
arXiv Detail & Related papers (2021-11-08T18:05:35Z) - Near Instance Optimal Model Selection for Pure Exploration Linear
Bandits [20.67688737534517]
We study the model selection problem in the pure exploration linear bandit setting.
Our goal is to automatically adapt to the instance-dependent complexity measure of the smallest hypothesis class.
Our algorithms define a new optimization problem based on experimental design.
arXiv Detail & Related papers (2021-09-10T22:56:58Z) - Towards Costless Model Selection in Contextual Bandits: A Bias-Variance
Perspective [7.318831153179727]
We study the feasibility of similar guarantees for cumulative regret minimization in the contextual bandit setting.
Our algorithm is based on a novel misspecification test, and our analysis demonstrates the benefits of using model selection for reward estimation.
arXiv Detail & Related papers (2021-06-11T16:08:03Z) - Pareto Optimal Model Selection in Linear Bandits [15.85873315624132]
We study a model selection problem in the linear bandit setting, where the learner must adapt to the dimension of the optimal hypothesis class on the fly.
In this paper, we first establish a lower bound showing that, even with a fixed action set, adaptation to the unknown intrinsic dimension $d_\star$ comes at a cost.
arXiv Detail & Related papers (2021-02-12T16:02:06Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
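
Below is a toy EXP3-style meta-learner over black-box base algorithms; the paper's meta-algorithm and its analysis are more refined (corralling-style), and the base learners here are hypothetical stand-ins with an assumed `act_and_observe` interface.

```python
import numpy as np

rng = np.random.default_rng(4)

class ConstantArmBase:
    """Toy base 'algorithm': always plays a fixed arm. Hypothetical stand-in
    for a real base learner such as UCB or LinUCB."""
    def __init__(self, arm, means):
        self.arm, self.means = arm, means
    def act_and_observe(self):
        return float(rng.random() < self.means[self.arm])   # Bernoulli reward in {0, 1}

def meta_select(bases, T=5000, eta=0.05, gamma=0.1):
    # EXP3-style exponential weights over base algorithms (sketch only).
    M = len(bases)
    log_w = np.zeros(M)
    for _ in range(T):
        soft = np.exp(log_w - log_w.max()); soft /= soft.sum()
        p = (1 - gamma) * soft + gamma / M      # uniform exploration mixing
        m = rng.choice(M, p=p)
        r = bases[m].act_and_observe()
        log_w[m] += eta * r / p[m]              # importance-weighted update
    return p

means = [0.3, 0.7]
print(meta_select([ConstantArmBase(0, means), ConstantArmBase(1, means)]))
```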
arXiv Detail & Related papers (2020-03-03T18:46:34Z) - Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling to big data.
This paper proposes a novel classifier framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.