Catoni-Style Change Point Detection for Regret Minimization in Non-Stationary Heavy-Tailed Bandits
- URL: http://arxiv.org/abs/2505.20051v1
- Date: Mon, 26 May 2025 14:40:47 GMT
- Title: Catoni-Style Change Point Detection for Regret Minimization in Non-Stationary Heavy-Tailed Bandits
- Authors: Gianmarco Genalti, Sujay Bhatt, Nicola Gatti, Alberto Maria Metelli
- Abstract summary: We tackle the heavy-tailed piecewise-stationary bandit problem. We provide a novel Catoni-style change-point detection strategy tailored for heavy-tailed distributions. We introduce Robust-CPD-UCB, which combines this change-point detection strategy with optimistic algorithms for bandits.
- Score: 31.212504858546232
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regret minimization in stochastic non-stationary bandits gained popularity over the last decade, as it can model a broad class of real-world problems, from advertising to recommendation systems. Existing literature relies on various assumptions about the reward-generating process, such as Bernoulli or subgaussian rewards. However, in settings such as finance and telecommunications, heavy-tailed distributions naturally arise. In this work, we tackle the heavy-tailed piecewise-stationary bandit problem. Heavy-tailed bandits, introduced by Bubeck et al., 2013, operate on the minimal assumption that the finite absolute centered moments of maximum order $1+\epsilon$ are uniformly bounded by a constant $v<+\infty$, for some $\epsilon \in (0,1]$. We focus on the most popular non-stationary bandit setting, i.e., the piecewise-stationary setting, in which the mean of reward-generating distributions may change at unknown time steps. We provide a novel Catoni-style change-point detection strategy tailored for heavy-tailed distributions that relies on recent advancements in the theory of sequential estimation, which is of independent interest. We introduce Robust-CPD-UCB, which combines this change-point detection strategy with optimistic algorithms for bandits, providing its regret upper bound and an impossibility result on the minimum attainable regret for any policy. Finally, we validate our approach through numerical experiments on synthetic and real-world datasets.
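The two building blocks described in the abstract, a Catoni-style robust mean estimate and a change-point test that triggers restarts, can be illustrated with a short sketch. The snippet below is a minimal illustration and not the paper's algorithm: it uses the classical Catoni influence function with a simplified scale parameter (assuming a known bound $v$ on the second central moment, i.e., the $\epsilon = 1$ case), and a naive two-window comparison whose threshold is a free parameter rather than the sequential test derived in the paper. The function names (`catoni_mean`, `change_detected`) are ours.

```python
import numpy as np
from scipy.optimize import brentq

def catoni_psi(x):
    # Catoni's influence function: roughly linear near zero but only
    # logarithmic for large |x|, so heavy-tailed outliers have bounded
    # influence on the estimating equation.
    return np.where(x >= 0,
                    np.log1p(x + 0.5 * x**2),
                    -np.log1p(-x + 0.5 * x**2))

def catoni_mean(samples, v=1.0, delta=0.05):
    """Catoni-style mean estimate (illustrative; assumes a known bound
    v on the second central moment, i.e. the epsilon = 1 case)."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    # Simplified scale parameter; a full analysis prescribes the exact
    # schedule as a function of n, v, and the confidence level delta.
    alpha = np.sqrt(2.0 * np.log(1.0 / delta) / (n * v))
    # The estimating equation is strictly decreasing in theta, so it has
    # a unique root bracketed by the (slightly enlarged) sample range.
    f = lambda theta: np.sum(catoni_psi(alpha * (x - theta)))
    return brentq(f, x.min() - 1.0 / alpha, x.max() + 1.0 / alpha)

def change_detected(rewards, window=50, threshold=0.5, v=1.0):
    """Naive two-window rule: flag a change for an arm when the Catoni
    means of the two most recent windows drift apart. The paper's
    detector uses a sequential, confidence-sequence-based test instead
    of a fixed threshold."""
    if len(rewards) < 2 * window:
        return False
    old = catoni_mean(rewards[-2 * window:-window], v)
    new = catoni_mean(rewards[-window:], v)
    return abs(new - old) > threshold
```

Plugged into an optimistic index policy, a detected change would typically trigger a restart of that arm's statistics, which is the role the change-point detector plays inside the Robust-CPD-UCB scheme described above.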
Related papers
- Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms [80.05851139852311]
We propose a novel policy to learn reward environments over a continuous space using Gaussian. We show that our method efficiently learns continuous Lipschitz reward functions with $\mathcal{O}^*(\sqrt{T})$ cumulative regret. Furthermore, our method naturally extends to non-stationary problems with a simple modification.
arXiv Detail & Related papers (2025-05-30T15:15:18Z) - Tracking Most Significant Shifts in Infinite-Armed Bandits [3.591122855617648]
We study an infinite-armed bandit problem where actions' mean rewards are initially sampled from a reservoir distribution. We show the first parameter-free optimal regret bounds for all regimes of regularity of the reservoir. We show that tighter regret bounds in terms of significant shifts can be adaptively attained by employing a randomized variant of elimination.
arXiv Detail & Related papers (2025-01-31T19:00:21Z) - Variance-Dependent Regret Bounds for Non-stationary Linear Bandits [52.872628573907434]
We propose algorithms that utilize the variance of the reward distribution as well as the total variation budget $B_K$, and show that they can achieve tighter regret upper bounds.
We introduce two novel algorithms: Restarted Weighted OFUL$^{+}$ and Restarted SAVE$^{+}$.
Notably, when the total variance $V_K$ is much smaller than $K$, our algorithms outperform previous state-of-the-art results on non-stationary linear bandits under different settings.
arXiv Detail & Related papers (2024-03-15T23:36:55Z) - $(\epsilon, u)$-Adaptive Regret Minimization in Heavy-Tailed Bandits [29.966828248335972]
We study the regret minimization problem when $\epsilon$ and $u$ are unknown to the learner.
AdaR-UCB is the first algorithm that enjoys regret guarantees nearly matching those of the non-adaptive heavy-tailed case.
arXiv Detail & Related papers (2023-10-04T17:11:15Z) - Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear
Contextual Bandits and Markov Decision Processes [59.61248760134937]
We propose an efficient algorithm to achieve a regret of $\tilde{O}(\sqrt{T}+\zeta)$.
The proposed algorithm relies on the recently developed uncertainty-weighted least-squares regression from linear contextual bandits.
We generalize our algorithm to the episodic MDP setting and are the first to achieve an additive dependence on the corruption level $\zeta$.
arXiv Detail & Related papers (2022-12-12T15:04:56Z) - Regret Minimization in Heavy-Tailed Bandits [12.272975892517039]
We revisit the classic regret-minimization problem in the multi-armed bandit setting.
We propose an optimal algorithm that matches the lower bound exactly in the first-order term.
We show that our index concentrates faster than the well-known truncated or trimmed empirical mean estimators (a sketch of the truncated mean appears after this list).
arXiv Detail & Related papers (2021-02-07T07:46:24Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimate the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z) - Minimax Policy for Heavy-tailed Bandits [10.203602318836444]
We modify the minimax policy MOSS for sub-Gaussian reward distributions by using a saturated empirical mean to design a new algorithm called Robust MOSS.
We show that if the moment of order $1+\epsilon$ for the reward distribution exists, then the refined strategy has a worst-case regret matching the lower bound while maintaining a distribution-dependent regret.
arXiv Detail & Related papers (2020-07-20T21:43:57Z) - Adaptive Discretization for Adversarial Lipschitz Bandits [85.39106976861702]
Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces.
A central theme here is the adaptive discretization of the action space, which gradually "zooms in" on the more promising regions.
We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds.
arXiv Detail & Related papers (2020-06-22T16:06:25Z) - Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
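As a point of reference for the robust estimators mentioned in the list above, here is a minimal sketch of a truncated empirical mean in the spirit of Bubeck et al., 2013, the baseline that several of these works compare against. It assumes a known moment bound $v$ and tail parameter $\epsilon$; the specific per-sample truncation schedule is part of this sketch's assumptions.

```python
import numpy as np

def truncated_mean(samples, v=1.0, eps=1.0, delta=0.05):
    """Truncated empirical mean for heavy-tailed rewards: the i-th
    sample is kept only if its magnitude lies below a truncation level
    that grows with i, trading the bias of truncation against the
    variance contributed by the tail."""
    x = np.asarray(samples, dtype=float)
    idx = np.arange(1, len(x) + 1)
    levels = (v * idx / np.log(1.0 / delta)) ** (1.0 / (1.0 + eps))
    return np.mean(np.where(np.abs(x) <= levels, x, 0.0))
```

Unlike the Catoni-style estimate sketched earlier, which downweights outliers smoothly through its influence function, the truncated mean simply discards large observations, which is why Catoni-type indices typically concentrate faster in this regime.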