Constrained regret minimization for multi-criterion multi-armed bandits
- URL: http://arxiv.org/abs/2006.09649v1
- Date: Wed, 17 Jun 2020 04:23:18 GMT
- Title: Constrained regret minimization for multi-criterion multi-armed bandits
- Authors: Anmol Kagrecha, Jayakrishnan Nair, Krishna Jagannathan
- Abstract summary: We study the problem of regret minimization over a given time horizon, subject to a risk constraint.
We propose a Risk-Constrained Lower Confidence Bound algorithm that guarantees logarithmic regret.
We prove lower bounds on the performance of any risk-constrained regret minimization algorithm.
- Score: 5.349852254138086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a stochastic multi-armed bandit setting and study the problem of
regret minimization over a given time horizon, subject to a risk constraint.
Each arm is associated with an unknown cost/loss distribution. The learning
agent is characterized by a risk-appetite that she is willing to tolerate,
which we model using a pre-specified upper bound on the Conditional Value at
Risk (CVaR). An optimal arm is one that minimizes the expected loss, among
those arms that satisfy the CVaR constraint. The agent is interested in
minimizing the number of pulls of suboptimal arms, including the ones that are
'too risky.' For this problem, we propose a Risk-Constrained Lower Confidence
Bound (RC-LCB) algorithm that guarantees logarithmic regret, i.e., the average
number of plays of all non-optimal arms is at most logarithmic in the horizon.
The algorithm also outputs a boolean flag that correctly identifies with high
probability, whether the given instance was feasible/infeasible with respect to
the risk constraint. We prove lower bounds on the performance of any
risk-constrained regret minimization algorithm and establish a fundamental
trade-off between regret minimization and feasibility identification. The
proposed algorithm and analyses can be readily generalized to solve constrained
multi-criterion optimization problems in the bandits setting.
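As a concrete illustration of the selection rule described in the abstract, here is a minimal Python sketch of one RC-LCB-style round: it estimates CVaR as the average of the worst alpha-fraction of observed losses and uses a generic sqrt(log(t)/n) confidence width. The width, the feasibility test, and the fallback are assumptions made for illustration; the paper's exact index and confidence terms differ.

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR at level alpha: average of the worst
    alpha-fraction of the observed losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[-k:].mean()

def select_arm(samples, t, alpha, tau, c=2.0):
    """One RC-LCB-style round: among arms whose optimistic CVaR estimate
    satisfies the constraint, play the one with the smallest optimistic
    (lower-confidence) mean loss."""
    best_arm, best_lcb = None, np.inf
    for a, x in enumerate(samples):
        n = len(x)
        if n == 0:
            return a  # initialization: play every arm once
        width = np.sqrt(c * np.log(t) / n)  # generic width, an assumption
        mean_lcb = np.mean(x) - width
        cvar_lcb = empirical_cvar(x, alpha) - width
        if cvar_lcb <= tau and mean_lcb < best_lcb:
            best_arm, best_lcb = a, mean_lcb
    if best_arm is None:
        # no arm currently looks feasible: play the least risky one
        best_arm = min(range(len(samples)),
                       key=lambda a: empirical_cvar(samples[a], alpha))
    return best_arm
```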
Related papers
- Planning and Learning in Risk-Aware Restless Multi-Arm Bandit Problem [4.178382980763478]
In restless multi-arm bandits, a central agent is tasked with optimally distributing limited resources across several bandits (arms).
In this work, we generalize the traditional restless multi-arm bandit problem with a risk-neutral objective by incorporating risk-awareness.
We establish indexability conditions for the case of a risk-aware objective and provide a solution based on Whittle index.
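For reference, a minimal sketch of the classical risk-neutral Whittle index for a single restless arm, computed by bisection on the passivity subsidy that makes acting and staying passive equally valuable (discounted formulation; the transition matrices and rewards are hypothetical inputs). The paper's contribution is a risk-aware generalization of this index, which is not shown here.

```python
import numpy as np

def whittle_index(state, P_act, P_pas, r_act, r_pas,
                  beta=0.95, lo=-10.0, hi=10.0, tol=1e-6):
    """Binary-search the passivity subsidy lam at which acting and
    staying passive are equally valuable in `state` (discounted MDP)."""
    def q_gap(lam):
        V = np.zeros(len(r_act))
        for _ in range(2000):  # value iteration under subsidy lam
            Q_act = r_act + beta * (P_act @ V)
            Q_pas = lam + r_pas + beta * (P_pas @ V)
            V_new = np.maximum(Q_act, Q_pas)
            if np.max(np.abs(V_new - V)) < tol:
                V = V_new
                break
            V = V_new
        return Q_act[state] - Q_pas[state]
    for _ in range(60):  # bisection on the subsidy
        lam = 0.5 * (lo + hi)
        if q_gap(lam) > 0:
            lo = lam  # subsidy too small: acting still strictly better
        else:
            hi = lam
    return 0.5 * (lo + hi)
```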
arXiv Detail & Related papers (2024-10-30T13:59:30Z)
- Best Arm Identification with Minimal Regret [55.831935724659175]
This problem setting elegantly amalgamates regret minimization and best arm identification (BAI).
The agent's goal is to identify the best arm with a prescribed confidence level.
The Double KL-UCB algorithm achieves optimality as the confidence level tends to zero.
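The Double KL-UCB construction builds on the standard KL-UCB index; a minimal sketch of that building block for Bernoulli arms follows (the paper's coupling of two such indices for regret and identification is not shown):

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n, t, iters=32):
    """Largest q >= mean such that n * KL(mean, q) <= log(t),
    found by bisection (KL is increasing in q on [mean, 1])."""
    level = math.log(max(t, 2))
    lo, hi = mean, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if n * kl_bernoulli(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo
```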
arXiv Detail & Related papers (2024-09-27T16:46:02Z)
- Minimax Linear Regression under the Quantile Risk [31.277788690403522]
We study the problem of designing minimax procedures in linear regression under the quantile risk.
We prove a matching upper bound on the worst-case quantile risk of a variant of the recently proposed min-max regression procedure.
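The quantile risk replaces the usual expected loss with a quantile of the loss distribution. A small Monte-Carlo sketch under a hypothetical Gaussian design, shown purely to make the risk measure concrete (this is not the paper's minimax construction):

```python
import numpy as np

def quantile_risk(estimator, q=0.9, trials=2000, n=50, d=5, sigma=1.0):
    """Monte-Carlo estimate of the q-quantile of an estimator's squared
    parameter error over random Gaussian designs."""
    rng = np.random.default_rng(0)
    losses = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        theta = rng.standard_normal(d)
        y = X @ theta + sigma * rng.standard_normal(n)
        theta_hat = estimator(X, y)
        losses.append(float(np.sum((theta_hat - theta) ** 2)))
    return np.quantile(losses, q)

# e.g. the quantile risk of ordinary least squares in this toy setup:
ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
print(quantile_risk(ols))
```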
arXiv Detail & Related papers (2024-06-17T23:24:14Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
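For contrast with sred's fully adaptive rejections, here is the classical Successive Rejects baseline, which eliminates one arm at the end of each fixed phase rather than in any round; a minimal sketch assuming a pull(arm) -> reward callable:

```python
import numpy as np

def successive_rejects(pull, K, budget):
    """Classical Successive Rejects: split the budget into K-1 phases and
    drop the empirically worst active arm at the end of each phase."""
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    active = list(range(K))
    counts = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    n_prev = 0
    for k in range(1, K):
        n_k = max(1, int(np.ceil((budget - K) / (log_bar * (K + 1 - k)))))
        for a in active:
            for _ in range(n_k - n_prev):
                sums[a] += pull(a)
                counts[a] += 1
        n_prev = n_k
        worst = min(active, key=lambda a: sums[a] / counts[a])
        active.remove(worst)
    return active[0]  # the single surviving arm
```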
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- Proportional Response: Contextual Bandits for Simple and Cumulative Regret Minimization [29.579719765255927]
We propose a new family of efficient bandit algorithms for the contextual bandit setting.
Our algorithms work with any function class, are robust to model misspecification, and can be used in continuous arm settings.
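One generic way to turn an arbitrary regression oracle into a contextual bandit algorithm is inverse-gap weighting (as in SquareCB); the sketch below shows that standard recipe only, not the paper's proportional-response algorithm:

```python
import numpy as np

def igw_probabilities(predicted_rewards, gamma):
    """Inverse-gap weighting: arms whose predicted reward lags the leader
    by a larger gap are explored proportionally less often."""
    y = np.asarray(predicted_rewards, dtype=float)
    K = len(y)
    best = int(np.argmax(y))
    p = np.zeros(K)
    for a in range(K):
        if a != best:
            p[a] = 1.0 / (K + gamma * (y[best] - y[a]))
    p[best] = 1.0 - p.sum()  # remaining mass goes to the leader
    return p

# usage with any regression oracle `model` (hypothetical interface):
#   probs = igw_probabilities(model.predict(context), gamma=10.0)
#   arm = np.random.default_rng().choice(len(probs), p=probs)
```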
arXiv Detail & Related papers (2023-07-05T08:34:54Z)
- Risk-aware linear bandits with convex loss [0.0]
We propose an optimistic UCB algorithm to learn optimal risk-aware actions, with regret guarantees similar to those of generalized linear bandits.
This approach requires solving a convex problem at each round of the algorithm, which can be relaxed by accepting approximate solutions obtained via online gradient descent.
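The relaxation referred to above replaces an exact convex solve with incremental updates; a generic projected online-gradient-descent step (the l2-ball feasible set is chosen purely for illustration):

```python
import numpy as np

def ogd_step(theta, grad, step_size, radius=1.0):
    """One projected online-gradient-descent step: move against the
    gradient of the round's convex loss, then project back onto the
    feasible set (an l2 ball here, purely for illustration)."""
    theta = theta - step_size * grad
    norm = np.linalg.norm(theta)
    if norm > radius:
        theta *= radius / norm
    return theta
```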
arXiv Detail & Related papers (2022-09-15T09:09:53Z)
- Safe Online Bid Optimization with Return-On-Investment and Budget Constraints subject to Uncertainty [87.81197574939355]
We study the nature of both the optimization and learning problems.
We provide an algorithm, namely GCB, guaranteeing sublinear regret at the cost of a potentially linear number of constraint violations.
More interestingly, we provide an algorithm, namely GCB_safe(psi,phi), guaranteeing both sublinear pseudo-regret and safety w.h.p. at the cost of accepting tolerances psi and phi.
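Schematically, a GCB_safe-style step optimizes the objective only over actions whose constraint confidence bounds pass within the tolerances psi and phi; the interface below is hypothetical and omits how the bounds themselves are constructed:

```python
def safe_select(rev_ucb, roi_lcb, spend_ucb, roi_min, budget, psi=0.0, phi=0.0):
    """Schematic safe step: maximize optimistic revenue over bids whose
    pessimistic ROI and optimistic spend meet the tolerance-relaxed
    constraints; fall back to a known-safe default otherwise."""
    feasible = [a for a in range(len(rev_ucb))
                if roi_lcb[a] >= roi_min - psi and spend_ucb[a] <= budget + phi]
    if not feasible:
        return None  # caller plays a default safe bid (e.g., bid nothing)
    return max(feasible, key=lambda a: rev_ucb[a])
```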
arXiv Detail & Related papers (2022-01-18T17:24:20Z)
- A Unifying Theory of Thompson Sampling for Continuous Risk-Averse Bandits [91.3755431537592]
This paper unifies the analysis of risk-averse Thompson sampling algorithms for the multi-armed bandit problem.
Using the contraction principle in the theory of large deviations, we prove novel concentration bounds for continuous risk functionals.
We show that a wide class of risk functionals as well as "nice" functions of them satisfy the continuity condition.
arXiv Detail & Related papers (2021-08-25T17:09:01Z)
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z)
- Thompson Sampling Algorithms for Mean-Variance Bandits [97.43678751629189]
We develop Thompson Sampling-style algorithms for mean-variance MAB.
We also provide comprehensive regret analyses for Gaussian and Bernoulli bandits.
Our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
arXiv Detail & Related papers (2020-02-01T15:33:50Z)
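A minimal sketch of Thompson-sampling-style selection for the mean-variance objective above, scoring each Gaussian arm by a posterior sample of rho * mean - variance; the priors and posterior form here are assumptions for illustration, not necessarily those analyzed in the paper:

```python
import numpy as np

def mvts_select(samples, rho, rng):
    """Thompson-sampling step on the mean-variance objective
    MV = rho * mean - variance (larger is better): draw a posterior
    sample of (mean, variance) per arm and play the best sampled MV."""
    best_arm, best_mv = None, -np.inf
    for a, x in enumerate(samples):
        n = len(x)
        if n < 2:
            return a  # need at least two observations per arm first
        xbar, s2 = np.mean(x), max(np.var(x), 1e-9)
        # variance ~ Inverse-Gamma posterior, then mean | variance ~ Normal
        var = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / (n * s2))
        mu = rng.normal(xbar, np.sqrt(var / n))
        mv = rho * mu - var
        if mv > best_mv:
            best_arm, best_mv = a, mv
    return best_arm
```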
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and accepts no responsibility for any consequences of its use.