Contextual Bandits with Stage-wise Constraints
- URL: http://arxiv.org/abs/2401.08016v1
- Date: Mon, 15 Jan 2024 23:58:21 GMT
- Title: Contextual Bandits with Stage-wise Constraints
- Authors: Aldo Pacchiano, Mohammad Ghavamzadeh, Peter Bartlett
- Abstract summary: We study contextual bandits in the presence of a stage-wise constraint (a constraint at each round)
We propose an upper-confidence bound algorithm for the problem and prove a $T$-round regret bound for it.
- Score: 46.412612374393895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study contextual bandits in the presence of a stage-wise constraint (a
constraint at each round), when the constraint must be satisfied both with high
probability and in expectation. Obviously, the setting where the constraint is
in expectation is a relaxation of the one with high probability. We start with
the linear case where both the contextual bandit problem (reward function) and
the stage-wise constraint (cost function) are linear. In each of the high
probability and in expectation settings, we propose an upper-confidence bound
algorithm for the problem and prove a $T$-round regret bound for it. Our
algorithms balance exploration and constraint satisfaction using a novel idea
that scales the radii of the reward and cost confidence sets with different
scaling factors. We also prove a lower bound for this constrained problem, show
how our algorithms and analyses can be extended to multiple constraints, and
provide simulations to validate our theoretical results. In the high
probability setting, we describe the minimum requirements for the action set in
order for our algorithm to be tractable. In the setting where the constraint is
in expectation, we further specialize our results to multi-armed bandits and
propose a computationally efficient algorithm for this setting with regret
analysis. Finally, we extend our results to the case where the reward and cost
functions are both non-linear. We propose an algorithm for this case and prove
a regret bound for it that characterizes the function class complexity by the
eluder dimension.
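The core algorithmic device in this abstract, acting optimistically on the reward while treating the cost pessimistically, with the two confidence radii scaled by different factors, can be sketched compactly. Below is a minimal, hypothetical illustration of the linear case, assuming a finite action set, ridge estimates of the reward parameter $\theta_*$ and cost parameter $\mu_*$, and a known per-round budget $\tau$; the scaling factors `alpha_r` and `alpha_c` and all function names are illustrative stand-ins, not the paper's exact constants or notation.

```python
import numpy as np

def select_action(features, A, b_r, b_c, tau, alpha_r=1.0, alpha_c=2.0):
    """One round of a constrained LinUCB-style rule (illustrative sketch).

    features : (n_actions, d) context-action feature matrix
    A        : (d, d) regularized Gram matrix, sum_s x_s x_s^T + lam * I
    b_r, b_c : (d,) running sums of x_s * reward_s and x_s * cost_s
    tau      : per-round (stage-wise) cost budget
    alpha_r, alpha_c : radius scalings of the reward / cost confidence sets
    """
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b_r  # ridge estimate of the reward parameter
    mu_hat = A_inv @ b_c     # ridge estimate of the cost parameter

    best, best_ucb = None, -np.inf
    for i, x in enumerate(features):
        width = np.sqrt(x @ A_inv @ x)                 # elliptical confidence width
        reward_ucb = x @ theta_hat + alpha_r * width   # optimistic reward estimate
        cost_ucb = x @ mu_hat + alpha_c * width        # pessimistic (upper) cost estimate
        if cost_ucb <= tau and reward_ucb > best_ucb:  # keep only certifiably feasible actions
            best, best_ucb = i, reward_ucb
    return best  # None if no action is certifiably feasible this round

# Toy usage: 20 random actions in dimension 5, before any data is observed.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5)) / np.sqrt(5)
a = select_action(X, np.eye(5), np.zeros(5), np.zeros(5), tau=3.0)
```

Setting `alpha_c` larger than `alpha_r` here is one possible choice that trades reward optimism for constraint safety; the paper derives the appropriate scaling factors, which this sketch does not attempt to reproduce.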
Related papers
- Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints [8.784438985280094]
We study pure exploration problems in multi-armed bandits with unknown linear constraints.
First, we propose a Lagrangian relaxation of the sample complexity lower bound for pure exploration under constraints.
Second, we leverage the Lagrangian lower bound and the properties of convexity to propose two computationally efficient extensions of Track-and-Stop and Gamified Explorer, namely LATS and LAGEX.
arXiv Detail & Related papers (2024-10-24T15:26:14Z)
- Beyond Primal-Dual Methods in Bandits with Stochastic and Adversarial Constraints [29.514323697659613]
We address a generalization of the bandit with knapsacks problem, where a learner aims to maximize rewards while satisfying an arbitrary set of long-term constraints.
Our goal is to design best-of-both-worlds algorithms that perform well under both stochastic and adversarial constraints.
arXiv Detail & Related papers (2024-05-25T08:09:36Z)
- Replicability is Asymptotically Free in Multi-armed Bandits [45.729105054410745]
This work is motivated by the growing demand for reproducible machine learning.
In particular, we consider a replicable algorithm that ensures, with high probability, that the algorithm's sequence of actions is not affected by the randomness inherent in the dataset.
arXiv Detail & Related papers (2024-02-12T03:31:34Z)
- On Kernelized Multi-Armed Bandits with Constraints [16.102401271318012]
We study a bandit problem with a general unknown reward function and a general unknown constraint function.
We propose a general framework for both algorithm design and performance analysis.
We demonstrate the superior performance of our proposed algorithms via numerical experiments.
arXiv Detail & Related papers (2022-03-29T14:02:03Z)
- Safe Online Bid Optimization with Return-On-Investment and Budget Constraints subject to Uncertainty [87.81197574939355]
We study the nature of both the optimization and learning problems.
We provide an algorithm, namely GCB, guaranteeing sublinear regret at the cost of a potentially linear number of constraint violations.
More interestingly, we provide an algorithm, namely GCB_safe($\psi$, $\phi$), guaranteeing both sublinear pseudo-regret and safety with high probability at the cost of accepting tolerances $\psi$ and $\phi$.
arXiv Detail & Related papers (2022-01-18T17:24:20Z)
- Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose a doubly growing epoch schedule and estimate the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z)
- Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic-pessimistic linear bandit (OPLB); a minimal sketch of the optimistic-pessimistic idea follows this list.
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
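The optimistic-pessimistic idea behind OPLB is commonly realized through policy mixing in this line of work: given a known safe action with cost $c_0 < \tau$, play the optimistic action only with a probability small enough that the pessimistic expected cost of the mixture stays within budget. The sketch below is a hypothetical illustration; the safe-action assumption and the names `c0`, `cost_ucb_opt`, and `mix_rate` are ours, not the paper's.

```python
def mix_rate(c0, cost_ucb_opt, tau):
    """Largest probability p of playing the optimistic action such that the
    pessimistic expected cost  p * cost_ucb_opt + (1 - p) * c0  is <= tau.
    Assumes access to a known safe action with cost c0 < tau (an assumption
    of this sketch, not something the summary above states).
    """
    if cost_ucb_opt <= tau:
        return 1.0  # the optimistic action is certifiably feasible on its own
    return (tau - c0) / (cost_ucb_opt - c0)

# Example: c0 = 0.1, tau = 0.5, pessimistic cost of the optimistic action = 1.3
# => play it with probability (0.5 - 0.1) / (1.3 - 0.1) = 1/3, else the safe action.
```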
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.