Regret Optimality of GP-UCB
- URL: http://arxiv.org/abs/2312.01386v1
- Date: Sun, 3 Dec 2023 13:20:08 GMT
- Title: Regret Optimality of GP-UCB
- Authors: Wenjia Wang and Xiaowei Zhang and Lu Zou
- Abstract summary: Gaussian Process Upper Confidence Bound (GP-UCB) is one of the most popular methods for optimizing black-box functions with noisy observations.
We establish new upper bounds on both the simple and cumulative regret of GP-UCB when the objective function to optimize admits a certain smoothness property.
With the same level of exploration, GP-UCB can simultaneously achieve optimality in both simple and cumulative regret.
- Score: 12.323109084902228
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gaussian Process Upper Confidence Bound (GP-UCB) is one of the most popular
methods for optimizing black-box functions with noisy observations, due to its
simple structure and superior performance. Its empirical successes lead to a
natural, yet unresolved question: Is GP-UCB regret optimal? In this paper, we
offer the first generally affirmative answer to this important open question in
the Bayesian optimization literature. We establish new upper bounds on both the
simple and cumulative regret of GP-UCB when the objective function to optimize
admits a certain smoothness property. These upper bounds match the known minimax
lower bounds (up to logarithmic factors independent of the feasible region's
dimensionality) for optimizing functions with the same smoothness.
Intriguingly, our findings indicate that, with the same level of exploration,
GP-UCB can simultaneously achieve optimality in both simple and cumulative
regret. Our analysis hinges on a refined uniform error bound for
online estimation of functions in reproducing kernel Hilbert spaces. This error
bound, which we derive from empirical process theory, is of independent
interest, and its potential applications may reach beyond the scope of this
study.
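For reference, the two regret notions at stake admit standard formulations; in the usual notation (assumed here, with $x^*$ a maximizer of $f$ and $x_t$ the point queried at round $t$):

```latex
% Cumulative regret after T rounds, and simple regret at the best query:
R_T \;=\; \sum_{t=1}^{T} \bigl( f(x^{*}) - f(x_t) \bigr),
\qquad
r_T \;=\; f(x^{*}) - \max_{1 \le t \le T} f(x_t).
```

A minimal sketch of the GP-UCB rule the abstract refers to: maintain a Gaussian process posterior over the objective and query the point maximizing the posterior mean plus a scaled posterior standard deviation. The squared-exponential kernel and the $\beta_t$ schedule below are illustrative placeholders, not choices taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel (an illustrative choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xcand, noise_var, lengthscale=0.2):
    """GP posterior mean/std at the candidate points; note the mean is a
    kernel ridge estimator with regularizer equal to the noise variance."""
    K = rbf_kernel(X, X, lengthscale) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(Xcand, X, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v ** 2).sum(axis=0)           # RBF prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def gp_ucb(f, Xcand, T=50, noise_std=0.1, seed=0):
    """Run GP-UCB for T rounds over a finite candidate set."""
    rng = np.random.default_rng(seed)
    X = [Xcand[rng.integers(len(Xcand))]]      # one random initial query
    y = [f(X[0]) + noise_std * rng.standard_normal()]
    for t in range(1, T):
        mu, sigma = gp_posterior(np.array(X), np.array(y), Xcand,
                                 noise_var=noise_std ** 2)
        beta_t = 2.0 * np.log((t + 1) ** 2)    # illustrative schedule
        x = Xcand[np.argmax(mu + np.sqrt(beta_t) * sigma)]  # optimism
        X.append(x)
        y.append(f(x) + noise_std * rng.standard_normal())
    return np.array(X), np.array(y)

# Toy usage on [0, 1]:
Xcand = np.linspace(0.0, 1.0, 200)[:, None]
f = lambda x: float(np.sin(6.0 * x[0]) + 0.5 * x[0])
X, y = gp_ucb(f, Xcand)
print("best noisy observation:", float(y.max()), "at x =", X[np.argmax(y)])
```

The `argmax` line is the whole algorithm: exploration is controlled entirely by how fast $\beta_t$ grows, which is why the abstract's observation that a single exploration level yields optimality in both regrets is notable.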
Related papers
- Regret Analysis for Randomized Gaussian Process Upper Confidence Bound [9.967062483758632]
This paper analyzes a randomized variant of GP-UCB called improved randomized GP-UCB (IRGP-UCB).
In both regret analyses, IRGP-UCB achieves a sub-linear regret upper bound without increasing the confidence parameter if the input domain is finite.
arXiv Detail & Related papers (2024-09-02T06:49:29Z)
- Data-driven optimal stopping: A pure exploration analysis [1.0742675209112622]
Minimax optimality is verified by completing the upper bound results with matching lower bounds on the simple regret.
We investigate how our results on the simple regret transfer to the cumulative regret for a specific exploration-exploitation strategy.
arXiv Detail & Related papers (2023-12-10T13:16:01Z)
- On the Sublinear Regret of GP-UCB [58.25014663727544]
We show that the Gaussian Process Upper Confidence Bound (GP-UCB) algorithm enjoys nearly optimal regret rates.
Our improvements rely on a key technical contribution: regularizing kernel ridge estimators in proportion to the smoothness of the underlying kernel, as sketched below.
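As a plain illustration of what such an estimator is: the GP posterior mean is a kernel ridge estimate, and the regularizer `lam` is the knob being tuned. The Matern-1/2 kernel and the smoothness-scaled rule for `lam` below are hypothetical placeholders, not the paper's choices.

```python
import numpy as np

def matern12(A, B, lengthscale=0.2):
    """Matern-1/2 kernel (an illustrative choice)."""
    return np.exp(-np.abs(A[:, None] - B[None, :]) / lengthscale)

def kernel_ridge_mean(X, y, Xq, lam):
    """f_hat(Xq) = k(Xq, X) (K + lam * I)^{-1} y: a kernel ridge estimate,
    which is also the GP posterior mean when lam is the noise variance."""
    K = matern12(X, X) + lam * np.eye(len(X))
    return matern12(Xq, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 30)
y = np.sin(6.0 * X) + 0.1 * rng.standard_normal(30)
lam = 0.01 * len(X) ** 0.5   # placeholder smoothness-scaled rule, not the paper's
print(kernel_ridge_mean(X, y, np.linspace(0.0, 1.0, 5), lam))
```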
arXiv Detail & Related papers (2023-07-14T13:56:11Z)
- No-Regret Constrained Bayesian Optimization of Noisy and Expensive Hybrid Models using Differentiable Quantile Function Approximations [0.0]
Constrained Upper Quantile Bound (CUQB) is a conceptually simple, deterministic approach that avoids constraint approximations.
We show that CUQB significantly outperforms traditional Bayesian optimization in both constrained and unconstrained cases.
arXiv Detail & Related papers (2023-05-05T19:57:36Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted ranges of the false-positive and true-positive rates.
Most existing methods can only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
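In symbols, the misspecified setting reads as follows; this is a standard formulation with assumed notation ($\mathcal{H}_k$ the RKHS, $\mathcal{X}$ the input domain), not text from that paper.

```latex
% f itself need not lie in the RKHS, but some bounded-norm g tracks it uniformly:
\exists\, g \in \mathcal{H}_k \ \text{with} \ \lVert g \rVert_{\mathcal{H}_k} \le B
\quad \text{such that} \quad
\sup_{x \in \mathcal{X}} \lvert f(x) - g(x) \rvert \;\le\; \epsilon .
```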
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Optimal Order Simple Regret for Gaussian Process Bandits [6.84224661918159]
We show that a pure exploration algorithm attains a simple regret bound significantly tighter than the existing ones.
We show that this bound is optimal up to logarithmic factors for the cases where a regret lower bound for the kernel is known.
arXiv Detail & Related papers (2021-08-20T16:49:32Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
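Gradient clipping itself is a one-line modification of SGD: rescale any stochastic gradient whose norm exceeds a threshold before taking the step. A minimal sketch follows; the constant stepsize and clip level are generic placeholders, not the stepsize rules proposed in that paper.

```python
import numpy as np

def clipped_sgd_step(x, grad, stepsize, clip_level):
    """One SGD step with gradient clipping: shrink the stochastic
    gradient onto the ball of radius clip_level, then step."""
    g = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(g)
    if norm > clip_level:
        g = g * (clip_level / norm)
    return x - stepsize * g

# Toy usage on f(x) = ||x||^2 / 2 with heavy-tailed gradient noise:
rng = np.random.default_rng(0)
x = np.ones(5)
for t in range(100):
    noise = rng.standard_t(df=2, size=5)  # heavy-tailed noise
    x = clipped_sgd_step(x, x + noise, stepsize=0.05, clip_level=1.0)
print(np.linalg.norm(x))
```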
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
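A zeroth-order method estimates gradients from function values alone; a minimal two-point coordinate estimator is sketched below (uniform coordinate sampling stands in as a placeholder for the paper's importance-sampling scheme).

```python
import numpy as np

def zo_coordinate_grad(f, x, coords, h=1e-4):
    """Two-point finite-difference estimate of the partial derivatives
    of f at x along the sampled coordinates; all other entries stay 0."""
    g = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Toy usage: f(x) = ||x||^2, true gradient 2x.
rng = np.random.default_rng(0)
x = rng.standard_normal(10)
coords = rng.choice(10, size=3, replace=False)  # uniform sampling (placeholder)
print(zo_coordinate_grad(lambda z: float((z ** 2).sum()), x, coords))
```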
arXiv Detail & Related papers (2020-12-21T17:29:58Z)