Towards Costless Model Selection in Contextual Bandits: A Bias-Variance Perspective
- URL: http://arxiv.org/abs/2106.06483v3
- Date: Mon, 23 Oct 2023 20:17:17 GMT
- Title: Towards Costless Model Selection in Contextual Bandits: A Bias-Variance Perspective
- Authors: Sanath Kumar Krishnamurthy, Adrienne Margaret Propp, Susan Athey
- Abstract summary: We study whether the costless guarantees of model selection in supervised learning extend to cumulative regret minimization in the contextual bandit setting.
Our algorithm is based on a novel misspecification test, and our analysis demonstrates the benefits of using model selection for reward estimation.
- Score: 7.318831153179727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model selection in supervised learning provides costless guarantees as if the
model that best balances bias and variance were known a priori. We study the
feasibility of similar guarantees for cumulative regret minimization in the
stochastic contextual bandit setting. Recent work [Marinov and Zimmert, 2021]
identifies instances where no algorithm can guarantee costless regret bounds.
Nevertheless, we identify benign conditions where costless model selection is
feasible: gradually increasing class complexity, and diminishing marginal
returns for best-in-class policy value with increasing class complexity. Our
algorithm is based on a novel misspecification test, and our analysis
demonstrates the benefits of using model selection for reward estimation.
Unlike prior work on model selection in contextual bandits, our algorithm
carefully adapts to the evolving bias-variance trade-off as more data is
collected. In particular, our algorithm and analysis go beyond adapting to the
complexity of the simplest realizable class and instead adapt to the complexity
of the simplest class whose estimation variance dominates the bias. For short
horizons, this provides improved regret guarantees that depend on the
complexity of simpler classes.
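To make the adaptive bias-variance trade-off concrete, here is a minimal sketch of epoch-based model selection over nested reward-model classes driven by a misspecification test. The test statistic, threshold, and interfaces (`fit`, `predict`, `classes`) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def misspecification_test(sq_err_simple, sq_err_richer, n, delta=0.05):
    # Flag the simpler class when its excess mean squared error over the
    # richer class exceeds a placeholder deviation term that shrinks like
    # sqrt(log(1/delta) / n).
    gap = np.mean(sq_err_simple) - np.mean(sq_err_richer)
    return gap > np.sqrt(np.log(1.0 / delta) / max(n, 1))

def select_class(fit, classes, k, X, A, R):
    # Fit the current class k and the next richer class on logged
    # (context, action, reward) data; escalate only when the test says
    # the simpler class's bias now dominates its estimation variance.
    model = fit(classes[k], X, A, R)
    if k + 1 < len(classes):
        richer = fit(classes[k + 1], X, A, R)
        err_k = (model.predict(X, A) - R) ** 2
        err_next = (richer.predict(X, A) - R) ** 2
        if misspecification_test(err_k, err_next, len(R)):
            k, model = k + 1, richer
    return k, model  # explore with `model` in the next epoch
```

On short horizons the sample size keeps the test from firing, so the learner pays only for the simpler classes; as data accumulates, it escalates exactly when a simpler class's bias becomes detectable above the noise.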
Related papers
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Cost-Effective Online Contextual Model Selection [14.094350329970537]
We formulate this task as an online contextual active model selection problem, where at each round the learner receives an unlabeled data point along with a context.
The goal is to output the best model for any given context without obtaining an excessive amount of labels.
We propose a contextual active model selection algorithm (CAMS), which relies on a novel uncertainty sampling query criterion defined on a given policy class for adaptive model selection (a generic uncertainty-sampling sketch follows this entry).
arXiv Detail & Related papers (2022-07-13T08:22:22Z)
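As a rough illustration of the uncertainty-sampling idea in the CAMS entry above, the sketch below queries a label with probability tied to disagreement among candidate models and tracks inverse-propensity-weighted losses. The query rule, weighting, and names are assumptions for exposition, not the paper's exact criterion.

```python
import numpy as np

def disagreement(preds):
    # Fraction of disagreeing pairs among the candidate models' predictions.
    preds = np.asarray(preds)
    return float(np.mean(preds[:, None] != preds[None, :]))

def step(models, losses, x, get_label, rng, q_min=0.05):
    preds = [m(x) for m in models]
    q = max(q_min, disagreement(preds))  # query more when models disagree
    if rng.random() < q:
        y = get_label(x)                 # costly label query
        for i, p in enumerate(preds):
            losses[i] += (p != y) / q    # inverse-propensity-weighted loss
    best = int(np.argmin(losses))
    return preds[best]                   # output the current best model's prediction
```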
- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity (a generic pessimistic backup is sketched after this entry).
arXiv Detail & Related papers (2022-02-28T15:39:36Z)
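For intuition on pessimism in offline RL, here is a tabular finite-horizon sketch: empirical Bellman backups minus a count-based penalty, so rarely observed state-action pairs are valued conservatively. The penalty form and the constant `c` are illustrative assumptions; the variance-reduced algorithm in the entry above is considerably more refined.

```python
import numpy as np

def pessimistic_q(trajectories, n_states, n_actions, horizon, c=1.0):
    # trajectories: list of length-`horizon` lists of (s, a, r, s_next).
    Q = np.zeros((horizon + 1, n_states, n_actions))
    for h in range(horizon - 1, -1, -1):
        total = np.zeros((n_states, n_actions))
        count = np.zeros((n_states, n_actions))
        for traj in trajectories:
            s, a, r, s_next = traj[h]
            total[s, a] += r + Q[h + 1, s_next].max()  # empirical backup
            count[s, a] += 1
        seen = count > 0
        Q[h][seen] = total[seen] / count[seen] - c / np.sqrt(count[seen])
        Q[h] = np.maximum(Q[h], 0.0)  # keep values nonnegative after the penalty
    return Q  # act greedily: argmax over actions of Q[h, s]
```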
- Universal and data-adaptive algorithms for model selection in linear contextual bandits [52.47796554359261]
We consider the simplest non-trivial instance of model selection: distinguishing a simple multi-armed bandit problem from a linear contextual bandit problem.
We introduce new algorithms that explore in a data-adaptive manner and provide guarantees of the form $\mathcal{O}(d^{\alpha} T^{1-\alpha})$.
Our approach extends to model selection among nested linear contextual bandits under some additional assumptions.
arXiv Detail & Related papers (2021-11-08T18:05:35Z)
- Near Instance Optimal Model Selection for Pure Exploration Linear Bandits [20.67688737534517]
We study the model selection problem in the pure exploration linear bandit setting.
Our goal is to automatically adapt to the instance-dependent complexity measure of the smallest hypothesis class.
Our algorithms define a new optimization problem based on experimental design.
arXiv Detail & Related papers (2021-09-10T22:56:58Z)
- Model Selection for Generic Contextual Bandits [20.207989166682832]
We propose a refinement-based algorithm called Adaptive Contextual Bandit (ACB).
We prove that this algorithm is adaptive, i.e., the regret rate order-wise matches that of any provable contextual bandit algorithm.
We also show that a much simpler explore-then-commit (ETC) style algorithm obtains a similar regret bound, despite not knowing the true model class.
arXiv Detail & Related papers (2021-07-07T19:35:31Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class (a minimal sketch of the weighted objective follows this entry).
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
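To show what the weighted objective in the Risk Minimization entry above can look like, here is a minimal sketch: each logged loss is reweighted by the inverse of the logging propensity so that the empirical objective stays unbiased despite adaptive data collection. `model`, `loss`, and the clipping level are assumptions for exposition and stability.

```python
import numpy as np

def isw_erm_objective(theta, X, A, Y, propensities, model, loss):
    # propensities[t]: probability that the logging policy chose action A[t]
    # given context X[t]; known because we ran the logging algorithm.
    preds = model(theta, X, A)
    weights = 1.0 / np.clip(propensities, 1e-3, None)  # clip for stability
    return np.mean(weights * loss(preds, Y))           # minimize over theta
```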
- Pareto Optimal Model Selection in Linear Bandits [15.85873315624132]
We study a model selection problem in the linear bandit setting, where the learner must adapt to the dimension of the optimal hypothesis class on the fly.
In this paper, we first establish a lower bound showing that, even with a fixed action set, adaptation to the unknown intrinsic dimension $d_\star$ comes at a cost.
arXiv Detail & Related papers (2021-02-12T16:02:06Z)
- Open Problem: Model Selection for Contextual Bandits [82.57505650713496]
In statistical learning, algorithms for model selection allow the learner to adapt to the complexity of the best hypothesis class.
We ask whether similar guarantees are possible for contextual bandit learning.
arXiv Detail & Related papers (2020-06-19T03:00:01Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel classifier-learning framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.