Restless Bandits with Many Arms: Beating the Central Limit Theorem
- URL: http://arxiv.org/abs/2107.11911v1
- Date: Sun, 25 Jul 2021 23:27:12 GMT
- Title: Restless Bandits with Many Arms: Beating the Central Limit Theorem
- Authors: Xiangyu Zhang, Peter I. Frazier
- Abstract summary: Finite-horizon restless bandits with multiple pulls per period play an important role in recommender systems, active learning, revenue management, and many other areas.
While an optimal policy can be computed, in principle, using dynamic programming, the computation required scales exponentially in the number of arms $N$.
We characterize a non-degeneracy condition and a class of novel practically-computable policies, called fluid-priority policies, in which the optimality gap is $O(1)$.
- Score: 25.639496138046546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider finite-horizon restless bandits with multiple pulls per period,
which play an important role in recommender systems, active learning, revenue
management, and many other areas. While an optimal policy can be computed, in
principle, using dynamic programming, the computation required scales
exponentially in the number of arms $N$. Thus, there is substantial value in
understanding the performance of index policies and other policies that can be
computed efficiently for large $N$. We study the growth of the optimality gap,
i.e., the loss in expected performance compared to an optimal policy, for such
policies in a classical asymptotic regime proposed by Whittle in which $N$
grows while holding constant the fraction of arms that can be pulled per
period. Intuition from the Central Limit Theorem and the tightest previous
theoretical bounds suggest that this optimality gap should grow like
$O(\sqrt{N})$. Surprisingly, we show that it is possible to outperform this
bound. We characterize a non-degeneracy condition and a wide class of novel
practically-computable policies, called fluid-priority policies, in which the
optimality gap is $O(1)$. These include most widely-used index policies. When
this non-degeneracy condition does not hold, we show that fluid-priority
policies nevertheless have an optimality gap that is $O(\sqrt{N})$,
significantly generalizing the class of policies for which convergence rates
are known. We demonstrate that fluid-priority policies offer state-of-the-art
performance on a collection of restless bandit problems in numerical
experiments.
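As a rough illustration of the priority-policy structure the paper studies (a minimal sketch only, not the authors' fluid-priority construction; the `priority` score function, state encoding, and tie-breaking below are hypothetical placeholders), each period one ranks the $N$ arms by a per-arm score and activates the top $\alpha N$ of them, holding the budget fraction $\alpha$ fixed as $N$ grows:

```python
import numpy as np

def priority_policy_step(states, priority, budget_frac, rng):
    """One period of a generic budgeted priority policy for an N-armed restless bandit.

    states:      length-N array of per-arm states.
    priority:    function mapping an arm's state to a scalar score
                 (e.g., a Whittle-style index); hypothetical placeholder.
    budget_frac: fraction alpha of arms that may be pulled this period.
    Returns a boolean mask of the arms to activate.
    """
    n_arms = len(states)
    budget = int(np.floor(budget_frac * n_arms))  # alpha * N pulls per period
    scores = np.asarray([priority(s) for s in states], dtype=float)
    # Sort by score (descending) with a random tie-break so that arms in the
    # same state are not systematically favored by their index.
    order = np.lexsort((rng.random(n_arms), -scores))
    active = np.zeros(n_arms, dtype=bool)
    active[order[:budget]] = True
    return active

# Toy usage: two-state arms where state 1 is the one worth pulling.
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=100)                     # N = 100 arms
mask = priority_policy_step(states, lambda s: float(s), 0.25, rng)
print(mask.sum())                                         # 25 arms pulled
```

The paper's contribution lies in how the priority order is chosen (from a fluid relaxation) and in showing that, under the non-degeneracy condition, the resulting optimality gap is $O(1)$ rather than the $O(\sqrt{N})$ suggested by central-limit intuition; the sketch above only fixes the budgeted top-$\alpha N$ structure that such policies share.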
Related papers
- Achieving Exponential Asymptotic Optimality in Average-Reward Restless Bandits without Global Attractor Assumption [11.41663079285674]
We propose a novel two-set policy that maintains two dynamic subsets of arms.
We show that our two-set policy is asymptotically optimal with an $O(\exp(-C N))$ optimality gap for an $N$-armed problem.
arXiv Detail & Related papers (2024-05-28T07:08:29Z) - Oracle-Efficient Reinforcement Learning for Max Value Ensembles [7.404901768256101]
Reinforcement learning (RL) in large or infinite state spaces is notoriously challenging, theoretically and experimentally.
In this work we aim to compete with the $\textit{max-following policy}$, which at each state follows the action of whichever constituent policy has the highest value.
Our main result is an efficient algorithm that learns to compete with the max-following policy, given only access to the constituent policies.
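The max-following rule itself is easy to write down; below is a hypothetical sketch that assumes value estimates for the constituent policies are available (the paper's setting is harder: only the policies themselves are accessible, so competing with this rule requires learning). The names `policies` and `value_estimates` are illustrative, not the paper's API.

```python
from typing import Callable, Sequence, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def max_following_action(
    state: State,
    policies: Sequence[Callable[[State], Action]],
    value_estimates: Sequence[Callable[[State], float]],
) -> Action:
    """Follow whichever constituent policy claims the highest value at `state`.

    `policies[k]` maps a state to policy k's action; `value_estimates[k]` maps
    a state to an estimate of policy k's value from that state. Both are
    illustrative placeholders rather than the paper's interface.
    """
    best_k = max(range(len(policies)), key=lambda k: value_estimates[k](state))
    return policies[best_k](state)
```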
arXiv Detail & Related papers (2024-05-27T01:08:23Z) - Unichain and Aperiodicity are Sufficient for Asymptotic Optimality of Average-Reward Restless Bandits [11.41663079285674]
We show that our policies are asymptotically optimal with an $O(1/\sqrt{N})$ optimality gap for an $N$-armed problem.
Our approach departs from most existing work that focuses on index or priority policies.
arXiv Detail & Related papers (2024-02-08T14:07:20Z) - Theoretical guarantees on the best-of-n alignment policy [110.21094183592358]
We show that the KL divergence between the best-of-$n$ policy and the base policy is equal to $\log(n) - (n-1)/n$.
We propose a new estimator for the KL divergence and empirically show that it provides a tight approximation through a few examples.
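For a continuous base distribution (so ties occur with probability zero), the best-of-$n$ draw has density $n\,p(x)F(x)^{n-1}$, so the log density ratio at a best-of-$n$ sample $x$ is $\log(n) + (n-1)\log F(x)$, and the quoted value $\log(n) - (n-1)/n$ can be checked by Monte Carlo. A minimal sketch, using a standard normal base with identity reward as a hypothetical example (not the paper's estimator):

```python
import numpy as np
from scipy.stats import norm

def best_of_n_kl_mc(n, num_samples=200_000, seed=0):
    """Monte Carlo estimate of KL(best-of-n || base) for a standard normal base.

    With continuous base density p and CDF F, the best-of-n sample has density
    n * p(x) * F(x)**(n - 1), so the log density ratio at a best-of-n draw x
    is log(n) + (n - 1) * log(F(x)).
    """
    rng = np.random.default_rng(seed)
    draws = rng.standard_normal((num_samples, n))
    best = draws.max(axis=1)                  # best-of-n samples (reward = identity)
    log_ratio = np.log(n) + (n - 1) * np.log(norm.cdf(best))
    return log_ratio.mean()

n = 8
print(best_of_n_kl_mc(n))           # roughly 1.204
print(np.log(n) - (n - 1) / n)      # 1.2044...
```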
arXiv Detail & Related papers (2024-01-03T18:39:13Z) - Thompson Exploration with Best Challenger Rule in Best Arm
Identification [66.33448474838342]
We study the fixed-confidence best arm identification problem in the bandit framework.
We propose a novel policy that combines Thompson sampling with a computationally efficient approach known as the best challenger rule.
arXiv Detail & Related papers (2023-10-01T01:37:02Z) - Restless Bandits with Average Reward: Breaking the Uniform Global
Attractor Assumption [12.471848976031904]
A fundamental goal is to efficiently compute policies that achieve a diminishing optimality gap as the number of arms, $N$, grows large.
Existing results on optimality all rely on the uniform global attractor property (UGAP), a complex and challenging-to-verify assumption.
We propose a general, simulation-based framework that converts any single-armed policy into a policy for the original $N$-armed problem.
arXiv Detail & Related papers (2023-05-31T21:26:43Z) - Estimating Optimal Policy Value in General Linear Contextual Bandits [50.008542459050155]
In many bandit problems, the maximal reward achievable by a policy is often unknown in advance.
We consider the problem of estimating the optimal policy value in the sublinear data regime before the optimal policy is even learnable.
We present a more practical, computationally efficient algorithm that estimates a problem-dependent upper bound on $V^*$.
arXiv Detail & Related papers (2023-02-19T01:09:24Z) - Near-optimality for infinite-horizon restless bandits with many arms [19.12759228067286]
Restless bandits are problems with applications in recommender systems, active learning, revenue management and other areas.
We derive a class of policies, called fluid-balance policies, that have an $O(\sqrt{N})$ optimality gap.
We also demonstrate empirically that fluid-balance policies provide state-of-the-art performance on specific problems.
arXiv Detail & Related papers (2022-03-29T18:49:21Z) - Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
First, we show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings we introduce the concept of committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z) - Softmax Policy Gradient Methods Can Take Exponential Time to Converge [60.98700344526674]
The softmax policy gradient (PG) method is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.
We demonstrate that softmax PG methods can take exponential time -- in terms of $|\mathcal{S}|$ and $\frac{1}{1-\gamma}$ -- to converge.
arXiv Detail & Related papers (2021-02-22T18:56:26Z) - Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds
Globally Optimal Policy [95.98698822755227]
We make the first attempt to study risk-sensitive deep reinforcement learning under the average reward setting with the variance risk criteria.
We propose an actor-critic algorithm that iteratively and efficiently updates the policy, the Lagrange multiplier, and the Fenchel dual variable.
arXiv Detail & Related papers (2020-12-28T05:02:26Z)