Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization
- URL: http://arxiv.org/abs/2203.07875v1
- Date: Tue, 15 Mar 2022 13:17:53 GMT
- Authors: Hung Tran-The and Sunil Gupta and Santu Rana and Svetha Venkatesh
- Abstract summary: The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
- Score: 63.8557841188626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The expected improvement (EI) algorithm is one of the most popular strategies
for optimization under uncertainty due to its simplicity and efficiency.
Despite its popularity, the theoretical aspects of this algorithm have not been
properly analyzed. In particular, whether the EI strategy with a standard
incumbent converges in the noisy setting is still an open question in the
Gaussian process bandit optimization problem. We aim to answer this question by
proposing a variant of EI with a standard incumbent defined via the GP
predictive mean. We prove that our algorithm converges, and achieves a
cumulative regret bound of $\mathcal O(\gamma_T\sqrt{T})$, where $\gamma_T$ is
the maximum information gain between $T$ observations and the Gaussian process
model. Based on this variant of EI, we further propose an algorithm called
Improved GP-EI that converges faster than previous counterparts. In particular,
our proposed variants of EI do not require the knowledge of the RKHS norm and
the noise's sub-Gaussianity parameter as in previous works. Empirical
validation in our paper demonstrates the effectiveness of our algorithms
compared to several baselines.
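As a rough illustration of the incumbent choice discussed in the abstract, the sketch below scores candidate points with the closed-form EI acquisition, taking as incumbent the largest GP posterior mean among previously queried points rather than the best noisy observation. All names and numbers here are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI of a Gaussian posterior N(mu, sigma^2) over an incumbent.

    EI = (mu - incumbent) * Phi(z) + sigma * phi(z), z = (mu - incumbent) / sigma,
    where Phi and phi are the standard normal CDF and PDF.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)  # guard sigma = 0
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Incumbent defined via the GP predictive mean: the largest posterior mean
# at the points queried so far (hypothetical values for illustration).
mu_at_queried = np.array([0.2, 0.8, 0.5])
incumbent = mu_at_queried.max()

# Score two candidate points (posterior mean and standard deviation) by EI.
ei = expected_improvement(mu=np.array([0.9, 0.8]),
                          sigma=np.array([0.3, 0.3]),
                          incumbent=incumbent)
```

With equal posterior standard deviations, the candidate with the higher posterior mean receives the higher EI score; EI is always nonnegative, since it is an expectation of a truncated improvement.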
Related papers
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, but the additional exploration step facilitates their convergence.
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - An Algebraically Converging Stochastic Gradient Descent Algorithm for
Global Optimization [14.336473214524663]
A key component in the algorithm is the randomness based on the value of the objective function.
We prove the convergence of the algorithm with an algebraic rate both in probability and in the parameter space.
We present several numerical examples to demonstrate the efficiency and robustness of the algorithm.
arXiv Detail & Related papers (2022-04-12T16:27:49Z) - Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z) - $\bar{G}_{mst}$:An Unbiased Stratified Statistic and a Fast Gradient
Optimization Algorithm Based on It [0.0]
A novel algorithm named MSSG, designed based on $\bar{G}_{mst}$, outperforms other SGD-like algorithms.
arXiv Detail & Related papers (2021-10-07T11:48:55Z) - No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.