Lower Bounds for Time-Varying Kernelized Bandits
- URL: http://arxiv.org/abs/2410.16692v1
- Date: Tue, 22 Oct 2024 04:45:47 GMT
- Title: Lower Bounds for Time-Varying Kernelized Bandits
- Authors: Xu Cai, Jonathan Scarlett
- Abstract summary: The optimization of black-box functions with noisy observations is a fundamental problem with widespread applications.
We consider non-stationary scenarios, which are crucial for certain applications but are currently less well-understood.
Under $\ell_{\infty}$-norm variations, our bounds are found to be close to the state-of-the-art upper bound.
- Abstract: The optimization of black-box functions with noisy observations is a fundamental problem with widespread applications, and has been widely studied under the assumption that the function lies in a reproducing kernel Hilbert space (RKHS). This problem has been studied extensively in the stationary setting, and near-optimal regret bounds are known via developments in both upper and lower bounds. In this paper, we consider non-stationary scenarios, which are crucial for certain applications but are currently less well-understood. Specifically, we provide the first algorithm-independent lower bounds, where the time variations are subject to a total variation budget according to some function norm. Under $\ell_{\infty}$-norm variations, our bounds are found to be close to the state-of-the-art upper bound (Hong \emph{et al.}, 2023). Under RKHS norm variations, the upper and lower bounds are still reasonably close but with more of a gap, raising the interesting open question of whether non-minor improvements in the upper bound are possible.
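- Setting (illustrative): the non-stationary problem above is usually formalized via dynamic regret against a drifting sequence of RKHS functions. The display below is a minimal sketch of that standard formulation; the symbols $R_T$, $f_t$, $x_t$, $\mathcal{X}$, and $B_T$ are introduced here for illustration and are not notation taken from the paper:
$$ R_T = \sum_{t=1}^{T} \Big( \max_{x \in \mathcal{X}} f_t(x) - f_t(x_t) \Big), \qquad \text{subject to} \quad \sum_{t=1}^{T-1} \big\| f_{t+1} - f_t \big\| \le B_T, $$
where $x_t$ is the point queried at round $t$, $B_T$ is the total variation budget, and the norm $\|\cdot\|$ is either the $\ell_{\infty}$-norm or the RKHS norm, matching the two cases compared against upper bounds in the abstract.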
Related papers
- Closing the Gap Between the Upper Bound and the Lower Bound of Adam's Iteration Complexity [51.96093077151991]
We derive a new convergence guarantee of Adam, with only an $L$-smooth condition and a bounded noise variance assumption.
Our proof utilizes novel techniques to handle the entanglement between momentum and adaptive learning rate.
arXiv Detail & Related papers (2023-10-27T09:16:58Z) - Adaptation to Misspecified Kernel Regularity in Kernelised Bandits [27.912690223941386]
We study adaptivity to the regularity of translation-invariant kernels.
It is impossible to simultaneously achieve optimal cumulative regret in a pair of RKHSs with different regularities.
We connect the statistical difficulty of adaptivity in continuum-armed bandits across three fundamental types of function spaces.
arXiv Detail & Related papers (2023-04-26T21:12:45Z) - High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - On the Convergence of AdaGrad(Norm) on $\R^{d}$: Beyond Convexity, Non-Asymptotic Rate and Acceleration [33.247600151322466]
We develop a deeper understanding of AdaGrad and its variants in the standard setting of smooth convex functions.
First, we demonstrate new techniques to explicitly bound the convergence rate of the vanilla AdaGrad for unconstrained problems.
Second, we propose a variant of AdaGrad for which we can show the convergence of the last iterate, instead of the average iterate.
arXiv Detail & Related papers (2022-09-29T14:44:40Z) - Sharp Bounds for Federated Averaging (Local SGD) and Continuous
Perspective [49.17352150219212]
Federated Averaging (FedAvg), also known as Local SGD, is one of the most popular algorithms in Federated Learning (FL).
We show how to analyze this quantity from the Stochastic Differential Equation (SDE) perspective.
arXiv Detail & Related papers (2021-11-05T22:16:11Z) - On Information Gain and Regret Bounds in Gaussian Process Bandits [8.499604625449907]
Consider the sequential optimization of an expensive-to-evaluate and possibly non-convex objective function $f$ from noisy feedback observations.
For the Matérn family of kernels, lower bounds on the maximum information gain $\gamma_T$, and on regret under the frequentist setting, are known (see the sketch of $\gamma_T$ after this list).
arXiv Detail & Related papers (2020-09-15T10:15:29Z) - On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645]
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
arXiv Detail & Related papers (2020-08-20T03:48:14Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Regret Bounds for Noise-Free Kernel-Based Bandits [4.924126492174802]
Kernel-based bandit is an extensively studied black-box optimization problem.
We discuss several upper bounds on regret, none of which seem order optimal, and provide a conjecture on the order-optimal regret bound.
arXiv Detail & Related papers (2020-02-12T17:06:14Z)
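As a point of reference for the Gaussian process bandits entry above, $\gamma_T$ denotes the maximum information gain. The following is the standard definition from the GP-bandit literature, stated here for convenience rather than taken from the summaries above:
$$ \gamma_T = \max_{A \subseteq \mathcal{X},\, |A| = T} \tfrac{1}{2} \log\det\big( I_T + \sigma^{-2} K_A \big), $$
where $K_A$ is the kernel matrix evaluated on the $T$ points in $A$ and $\sigma^2$ is the observation noise variance; regret bounds for kernelized bandits are typically expressed in terms of $\gamma_T$.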