The Behavior and Convergence of Local Bayesian Optimization
- URL: http://arxiv.org/abs/2305.15572v3
- Date: Fri, 8 Mar 2024 21:39:18 GMT
- Title: The Behavior and Convergence of Local Bayesian Optimization
- Authors: Kaiwen Wu, Kyurae Kim, Roman Garnett and Jacob R. Gardner
- Abstract summary: Local optimization strategies can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
We first study the behavior of the local approach, and find that the statistics of individual local solutions of Gaussian process sample paths are surprisingly good compared to what we would expect to recover from global methods.
- Score: 20.568490114736818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A recent development in Bayesian optimization is the use of local
optimization strategies, which can deliver strong empirical performance on
high-dimensional problems compared to traditional global strategies. The "folk
wisdom" in the literature is that the focus on local optimization sidesteps the
curse of dimensionality; however, little is known concretely about the expected
behavior or convergence of Bayesian local optimization routines. We first study
the behavior of the local approach, and find that the statistics of individual
local solutions of Gaussian process sample paths are surprisingly good compared
to what we would expect to recover from global methods. We then present the
first rigorous analysis of such a Bayesian local optimization algorithm
recently proposed by Müller et al. (2021), and derive convergence rates in
both the noisy and noiseless settings.
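To make the analyzed setting concrete, the sketch below implements a local Bayesian optimization loop in the spirit of the algorithm of Müller et al. (2021): fit a Gaussian process to evaluations near the current iterate, compute the posterior mean of the objective's gradient, and take a small step along its negative. The kernel choice, sampling radius, step size, and the helper names (`rbf_kernel`, `gp_grad_mean`, `local_bo_minimize`) are illustrative assumptions rather than the authors' exact method.
```python
# Minimal sketch of a GP-based local Bayesian optimization loop (GIBO-style).
# Illustrative only: kernel, hyperparameters, and the point-acquisition rule
# are simplifying assumptions, not the exact algorithm analyzed in the paper.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel k(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def rbf_kernel_grad(x, X, lengthscale=1.0, variance=1.0):
    # Gradient of k(x, X_i) with respect to x; returns an (n, d) array.
    k = rbf_kernel(x[None, :], X, lengthscale, variance)[0]
    return ((X - x[None, :]) / lengthscale ** 2) * k[:, None]

def gp_grad_mean(x, X, y, lengthscale=1.0, variance=1.0, noise=1e-4):
    # Posterior mean of grad f(x) under a GP prior, conditioned on observations (X, y).
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return rbf_kernel_grad(x, X, lengthscale, variance).T @ alpha

def local_bo_minimize(f, x0, n_iters=50, batch=16, radius=0.2, step=0.1, seed=0):
    # Local BO loop: sample points around the iterate, condition the GP,
    # and descend along the negative posterior-mean gradient.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        X = x + radius * rng.standard_normal((batch, x.size))
        y = np.array([f(xi) for xi in X])
        x = x - step * gp_grad_mean(x, X, y - y.mean())
    return x

# Toy usage: on a simple quadratic the iterate drifts toward the minimizer at 0.
x_hat = local_bo_minimize(lambda z: float(np.sum(z ** 2)), x0=np.ones(5))
print(x_hat)
```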
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- High-dimensional Bayesian Optimization via Covariance Matrix Adaptation Strategy [16.521207412129833]
We propose a novel technique for defining the local regions using the Covariance Matrix Adaptation (CMA) strategy.
Based on this search distribution, we then define the local regions consisting of data points with high probabilities of being the global optimum.
Our approach serves as a meta-algorithm, as it can incorporate existing black-box BO algorithms, such as standard BO, TuRBO, and BAxUS, to find the global optimum.
arXiv Detail & Related papers (2024-02-05T15:32:10Z)
- LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
arXiv Detail & Related papers (2023-11-19T13:56:24Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of descent.
The biggest advantage of this approach is greater exploration around the current state before deciding on a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z)
- Optimistic Optimization of Gaussian Process Samples [30.226274682578172]
A competing, computationally more efficient, global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in form of a dissimilarity function.
We argue that there is a new research domain between geometric and probabilistic search, i.e. methods that run drastically faster than traditional Bayesian optimization, while retaining some of the crucial functionality of Bayesian optimization.
arXiv Detail & Related papers (2022-09-02T09:06:24Z)
- Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning [58.79085525115987]
Local methods are one of the promising approaches to reduce communication time.
We show that the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss.
arXiv Detail & Related papers (2022-02-12T15:12:17Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to a significantly smaller and computationally cheaper sub-solver for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)