Ballooning Multi-Armed Bandits
- URL: http://arxiv.org/abs/2001.10055v3
- Date: Mon, 22 Feb 2021 12:33:14 GMT
- Title: Ballooning Multi-Armed Bandits
- Authors: Ganesh Ghalme, Swapnil Dhamal, Shweta Jain, Sujit Gujar, Y. Narahari
- Abstract summary: We introduce Ballooning Multi-Armed Bandits (BL-MAB).
In BL-MAB, the set of available arms grows (or balloons) over time.
We show that if the best arm is equally likely to arrive at any time instant, a sublinear regret cannot be achieved.
- Score: 12.205797997133397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce Ballooning Multi-Armed Bandits (BL-MAB), a novel
extension of the classical stochastic MAB model. In the BL-MAB model, the set
of available arms grows (or balloons) over time. In contrast to the classical
MAB setting where the regret is computed with respect to the best arm overall,
the regret in a BL-MAB setting is computed with respect to the best available
arm at each time. We first observe that the existing stochastic MAB algorithms
result in linear regret for the BL-MAB model. We prove that, if the best arm is
equally likely to arrive at any time instant, a sub-linear regret cannot be
achieved. Next, we show that if the best arm is more likely to arrive in the
early rounds, one can achieve sub-linear regret. Our proposed algorithm
determines (1) the fraction of the time horizon for which the newly arriving
arms should be explored and (2) the sequence of arm pulls in the exploitation
phase from among the explored arms. Making reasonable assumptions on the
arrival distribution of the best arm in terms of the thinness of the
distribution's tail, we prove that the proposed algorithm achieves sub-linear
instance-independent regret. We further quantify the explicit dependence of
the regret on the arrival distribution's parameters. We reinforce our theoretical findings
with extensive simulation results. We conclude by showing that our algorithm
would achieve sub-linear regret even if (a) the distributional parameters are
not exactly known, but are obtained using a reasonable learning mechanism or
(b) the best arm is not more likely to arrive early, but a large fraction of
arms is likely to arrive relatively early.
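To make the two-stage structure concrete, the following is a minimal Python sketch in the spirit of the algorithm described above, not the authors' exact procedure: the exploration fraction `tau` (which the paper derives from the tail of the best arm's arrival distribution), the UCB1 index, and the toy arrival process are all illustrative assumptions.

```python
import math
import random

def bl_mab_explore_then_exploit(arrivals, draw_reward, T, tau):
    """Explore-then-exploit sketch for a ballooning bandit.

    arrivals[t]    -- list of arm ids that become available at round t
    draw_reward(a) -- samples a stochastic reward in [0, 1] for arm a
    tau            -- fraction of the horizon during which newly arriving
                      arms are admitted; the paper derives this from the
                      arrival distribution's tail, here it is a free parameter
    """
    counts, sums, total = {}, {}, 0.0

    def ucb(a, t):
        # standard UCB1 index over the arms explored so far
        return sums[a] / counts[a] + math.sqrt(2 * math.log(t + 1) / counts[a])

    for t in range(T):
        if t < tau * T:
            for a in arrivals[t]:      # admit arms that arrive early enough
                counts[a], sums[a] = 0, 0.0
        if not counts:
            continue                   # no arm has arrived yet
        unexplored = [a for a in counts if counts[a] == 0]
        arm = unexplored[0] if unexplored else max(counts, key=lambda a: ucb(a, t))
        r = draw_reward(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

# toy run: 50 Bernoulli arms whose arrival times are biased toward early rounds
T = 10_000
means = [random.random() for _ in range(50)]
arrivals = [[] for _ in range(T)]
for a in range(50):
    arrivals[int(T * random.random() ** 3)].append(a)   # early-arrival bias
bl_mab_explore_then_exploit(arrivals, lambda a: float(random.random() < means[a]), T, tau=0.3)
```

Note that arms arriving after the cutoff `tau * T` are simply never admitted, mirroring the idea that exploration of new arrivals is confined to an early fraction of the horizon.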
Related papers
- Best Arm Identification with Minimal Regret [55.831935724659175]
The studied problem elegantly amalgamates regret minimization and best arm identification (BAI).
The agent's goal is to identify the best arm with a prescribed confidence level.
The Double KL-UCB algorithm achieves optimality as the confidence level tends to zero.
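For context, here is a minimal sketch of the standard KL-UCB index for Bernoulli rewards, the building block that KL-UCB-style algorithms such as the Double KL-UCB mentioned above refine; the bisection tolerance and exploration bonus `c` are conventional choices, not details from this paper.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, c=0.0, tol=1e-6):
    """Classical KL-UCB upper confidence bound: the largest q >= mean with
    pulls * KL(mean, q) <= log(t) + c * log(log(t)), found by bisection."""
    t = max(t, 2)
    budget = math.log(t) + c * math.log(math.log(t))
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pulls * bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```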
arXiv Detail & Related papers (2024-09-27T16:46:02Z) - Optimal Best Arm Identification with Fixed Confidence in Restless Bandits [66.700654953613]
We study best arm identification in a restless multi-armed bandit setting with finitely many arms.
The discrete-time data generated by each arm forms a homogeneous Markov chain taking values in a common, finite state space.
It is demonstrated that tracking the long-term behavior of a certain Markov decision process and its state-action visitation proportions is the key ingredient in analyzing the converse and achievability bounds.
arXiv Detail & Related papers (2023-10-20T10:04:05Z) - Best Arm Identification in Bandits with Limited Precision Sampling [14.011731120150124]
We study best arm identification in a variant of the multi-armed bandit problem where the learner has limited precision in arm selection.
We propose a modified tracking-based algorithm to handle non-unique optimal allocations.
arXiv Detail & Related papers (2023-05-10T12:07:48Z) - Best Arm Identification in Restless Markov Multi-Armed Bandits [85.55466536537293]
We study the problem of identifying the best arm in a multi-armed bandit environment.
A decision entity wishes to find the index of the best arm as quickly as possible, subject to an upper bound on the error probability.
We show that this policy achieves an upper bound that depends on $R$ and is monotonically non-increasing as $R \to \infty$.
arXiv Detail & Related papers (2022-03-29T04:58:04Z) - Best Arm Identification under Additive Transfer Bandits [49.69203462561861]
We study the setting when, despite the means being unknown, there is a known additive relationship between the source and target MAB instances.
We propose and theoretically analyze an LUCB-style algorithm to identify an $\epsilon$-optimal target arm with high probability.
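As background, a minimal sketch of the generic LUCB sampling/stopping check that LUCB-style algorithms adapt; the Hoeffding-style confidence radius and its union-bound constant are common textbook choices, and nothing here reflects the paper's transfer-specific estimates.

```python
import math

def confidence_radius(pulls, t, delta, k):
    # Hoeffding-style radius with a simple union-bound constant (illustrative)
    return math.sqrt(math.log(5 * k * t**4 / (4 * delta)) / (2 * pulls))

def lucb_check(means, pulls, t, delta, eps):
    """One LUCB round for k >= 2 arms: returns (stop, best, challenger).

    means[a] -- empirical mean of arm a; pulls[a] -- times pulled (all >= 1).
    LUCB would next sample both `best` and `challenger` if stop is False.
    """
    k = len(means)
    best = max(range(k), key=lambda a: means[a])
    radius = [confidence_radius(pulls[a], t, delta, k) for a in range(k)]
    challenger = max((a for a in range(k) if a != best),
                     key=lambda a: means[a] + radius[a])
    # stop once the challenger's UCB exceeds the best arm's LCB by at most eps
    stop = (means[challenger] + radius[challenger]) - (means[best] - radius[best]) <= eps
    return stop, best, challenger
```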
arXiv Detail & Related papers (2021-12-08T02:20:18Z) - The Countable-armed Bandit with Vanishing Arms [8.099977107670918]
We consider a bandit problem with countably many arms partitioned into finitely many "types".
A "non-stationary" distribution governs the relative abundance of each arm-type in the population of arms, a.k.a. the "arm-reservoir".
arXiv Detail & Related papers (2021-10-23T02:47:55Z) - Achieving the Pareto Frontier of Regret Minimization and Best Arm
Identification in Multi-Armed Bandits [91.8283876874947]
We design and analyze the BoBW-lil'UCB$(\gamma)$ algorithm.
We show that no algorithm can simultaneously perform optimally for both the regret minimization (RM) and best arm identification (BAI) objectives.
We also show that BoBW-lil'UCB$(\gamma)$ outperforms a competitor in terms of both time complexity and regret.
arXiv Detail & Related papers (2021-10-16T17:52:32Z) - Combinatorial Blocking Bandits with Stochastic Delays [33.65025386998747]
Recent work has considered natural variations of the multi-armed bandit problem, where the reward of each arm is a special function of the time passed since its last pulling.
In this work, we extend the above model in two directions, including the general setting where more than one arm can be played at each round, subject to feasibility constraints.
We provide a tight approximation analysis of a natural greedy heuristic that always plays a feasible subset of maximum expected reward among the available (non-blocked) arms.
When the arms' expected rewards are unknown, we adapt the above greedy heuristic into a bandit algorithm.
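A minimal sketch of such a greedy rule, assuming known expected rewards, deterministic blocking delays, and a simple cardinality constraint as the feasibility constraint; all three are simplifying assumptions for illustration.

```python
def greedy_blocking_play(mu, delays, T, k):
    """Greedy rule: each round, play the (at most k) highest-mean arms that
    are currently available; playing arm a at round t makes it available
    again only at round t + delays[a].

    mu     -- known expected reward per arm (the bandit version estimates these)
    delays -- deterministic blocking durations (the paper allows stochastic ones)
    k      -- cardinality feasibility constraint (illustrative)
    """
    blocked_until = [0] * len(mu)
    total = 0.0
    for t in range(T):
        available = [a for a in range(len(mu)) if blocked_until[a] <= t]
        for a in sorted(available, key=lambda a: mu[a], reverse=True)[:k]:
            total += mu[a]               # tally expected reward
            blocked_until[a] = t + delays[a]
    return total
```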
arXiv Detail & Related papers (2021-05-22T02:46:04Z) - Optimal Best-arm Identification in Linear Bandits [79.3239137440876]
We devise a simple algorithm whose sample complexity matches known instance-specific lower bounds.
Unlike existing best-arm identification strategies, our algorithm uses a stopping rule that does not depend on the number of arms.
arXiv Detail & Related papers (2020-06-29T14:25:51Z)