Non-Stationary Bandits under Recharging Payoffs: Improved Planning with
Sublinear Regret
- URL: http://arxiv.org/abs/2205.14790v1
- Date: Sun, 29 May 2022 23:55:36 GMT
- Title: Non-Stationary Bandits under Recharging Payoffs: Improved Planning with
Sublinear Regret
- Authors: Orestis Papadigenopoulos, Constantine Caramanis, Sanjay Shakkottai
- Abstract summary: The multi-armed bandit setting has been recently studied in the non-stationary regime.
The mean payoff of each action is a non-decreasing function of the number of rounds that have passed since it was last played.
We show how our algorithm can be transformed into a bandit algorithm with sublinear regret.
- Score: 34.44347218903429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The stochastic multi-armed bandit setting has been recently studied in the
non-stationary regime, where the mean payoff of each action is a non-decreasing
function of the number of rounds passed since it was last played. This model
captures natural behavioral aspects of the users which crucially determine the
performance of recommendation platforms, ad placement systems, and more. Even
assuming prior knowledge of the mean payoff functions, computing an optimal
planning in the above model is NP-hard, while the state-of-the-art is a
$1/4$-approximation algorithm for the case where at most one arm can be played
per round. We first focus on the setting where the mean payoff functions are
known. In this setting, we significantly improve the best-known guarantees for
the planning problem by developing a polynomial-time
$(1-{1}/{e})$-approximation algorithm (asymptotically and in expectation),
based on a novel combination of randomized LP rounding and a time-correlated
(interleaved) scheduling method. Furthermore, our algorithm achieves improved
guarantees -- compared to prior work -- for the case where more than one arm
can be played at each round. Moving to the bandit setting, when the mean payoff
functions are initially unknown, we show how our algorithm can be transformed
into a bandit algorithm with sublinear regret.
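  To make the recharging-payoffs model above concrete, the following Python sketch simulates a small instance and plans with a myopic greedy rule. The recharge curves, the per-arm parameters, and the greedy planner are illustrative assumptions only; this is not the paper's randomized-LP-rounding and interleaved-scheduling algorithm, just a minimal baseline for the same planning problem.

```python
import numpy as np

# Toy recharging-bandit instance: the mean payoff of an arm is a
# non-decreasing function of the delay tau = number of rounds since
# that arm was last played.  Curve shapes and parameters are assumed
# for illustration; they do not come from the paper.

RATES = [0.2, 0.5, 1.0]   # per-arm recharge speeds (hypothetical)
CAPS  = [1.0, 0.8, 0.6]   # per-arm saturation levels (hypothetical)


def mean_payoff(arm: int, tau: int) -> float:
    """Non-decreasing, saturating recharge curve for one arm."""
    return CAPS[arm] * (1.0 - np.exp(-RATES[arm] * tau))


def greedy_plan(num_arms: int, horizon: int):
    """Myopic planner: each round, play the arm with the largest current
    mean payoff given its recharge delay.  A simple baseline, not the
    (1 - 1/e)-approximation algorithm from the paper."""
    last_played = [-(10 ** 6)] * num_arms  # start all arms fully recharged
    plan, total = [], 0.0
    for t in range(horizon):
        delays = [t - last_played[i] for i in range(num_arms)]
        payoffs = [mean_payoff(i, d) for i, d in enumerate(delays)]
        arm = int(np.argmax(payoffs))
        total += payoffs[arm]
        plan.append(arm)
        last_played[arm] = t
    return plan, total


if __name__ == "__main__":
    plan, value = greedy_plan(num_arms=3, horizon=20)
    print("greedy plan:", plan)
    print("expected cumulative payoff:", round(value, 3))
```

  The paper's planner would replace the greedy rule above with randomized LP rounding combined with a time-correlated (interleaved) schedule, and in the bandit setting would additionally estimate the unknown recharge curves from observed rewards.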
Related papers
- Near-Optimal Algorithm for Non-Stationary Kernelized Bandits [6.379833644595456]
We study a non-stationary kernelized bandit (KB) problem, also called time-varying Bayesian optimization.
We show the first algorithm-independent regret lower bound for non-stationary KB with squared exponential and Matérn kernels.
We propose a novel near-optimal algorithm called restarting phased elimination with random permutation.
arXiv Detail & Related papers (2024-10-21T14:28:26Z) - Variance-Dependent Regret Bounds for Non-stationary Linear Bandits [52.872628573907434]
We propose algorithms that utilize the variance of the reward distribution as well as the total variation budget $B_K$, and show that they can achieve tighter regret upper bounds.
We introduce two novel algorithms: Restarted Weighted-OFUL$^+$ and Restarted SAVE$^+$.
Notably, when the total variance $V_K$ is much smaller than $K$, our algorithms outperform previous state-of-the-art results on non-stationary linear bandits under different settings.
arXiv Detail & Related papers (2024-03-15T23:36:55Z) - Replicability is Asymptotically Free in Multi-armed Bandits [45.729105054410745]
This work is motivated by the growing demand for reproducible machine learning.
In particular, we consider a replicable algorithm that ensures, with high probability, that the algorithm's sequence of actions is not affected by the randomness inherent in the dataset.
arXiv Detail & Related papers (2024-02-12T03:31:34Z) - Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel stochastic-greedy bandit (SGB) algorithm for combinatorial multi-armed bandit problems when no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
arXiv Detail & Related papers (2023-12-13T11:08:25Z) - Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear
Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm that achieves a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandits.
We generalize our algorithm to the episodic MDP setting and are the first to achieve an additive dependence on the corruption level $\zeta$.
arXiv Detail & Related papers (2022-12-12T15:04:56Z) - A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian
Process Bandits [118.22458816174144]
We propose a novel robust elimination-type algorithm that runs in epochs, combines exploration with infrequent switching to select a small subset of actions, and plays each action for multiple time instants.
Our algorithm, GP Robust Phased Elimination (RGP-PE), successfully balances robustness to corruptions with exploration and exploitation.
We perform the first empirical study of robustness in the corrupted GP bandit setting, and show that our algorithm is robust against a variety of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T21:19:36Z) - Corralling Stochastic Bandit Algorithms [54.10645564702416]
We show that the regret of the corralling algorithms is no worse than that of the best algorithm containing the arm with the highest reward, and that this regret depends on the gap between the highest reward and the other rewards.
arXiv Detail & Related papers (2020-06-16T15:33:12Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z) - Bandit algorithms to emulate human decision making using probabilistic
distortions [20.422725678982726]
We formulate two sample multi-armed bandit problems with distorted probabilities on the reward distributions.
We consider the aforementioned problems in the regret minimization as well as best arm identification framework for multi-armed bandits.
arXiv Detail & Related papers (2016-11-30T17:37:51Z)