Unleashing the Potential of Acquisition Functions in High-Dimensional
Bayesian Optimization
- URL: http://arxiv.org/abs/2302.08298v2
- Date: Wed, 24 Jan 2024 02:58:56 GMT
- Title: Unleashing the Potential of Acquisition Functions in High-Dimensional
Bayesian Optimization
- Authors: Jiayu Zhao, Renyu Yang, Shenghao Qiu, Zheng Wang
- Abstract summary: Bayesian optimization is widely used to optimize expensive-to-evaluate black-box functions.
In high-dimensional problems, finding the global maximum of the acquisition function can be difficult.
We propose a better initialization approach that employs multiple heuristic optimizers to leverage the historical data of black-box optimization when generating initial points for the AF maximizer.
- Score: 5.349207553730357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) is widely used to optimize expensive-to-evaluate
black-box functions. BO first builds a surrogate model to represent the
objective function and assesses its uncertainty. It then decides where to
sample by maximizing an acquisition function (AF) based on the surrogate model.
However, when dealing with high-dimensional problems, finding the global
maximum of the AF becomes increasingly challenging. In such cases, the
initialization of the AF maximizer plays a pivotal role, as an inadequate setup
can severely hinder the effectiveness of the AF.
This paper investigates a largely understudied problem concerning the impact
of AF maximizer initialization on exploiting AFs' capability. Our large-scale
empirical study shows that the widely used random initialization strategy often
fails to harness the potential of an AF. In light of this, we propose a better
initialization approach by employing multiple heuristic optimizers to leverage
the historical data of black-box optimization to generate initial points for
the AF maximizer. We evaluate our approach with a range of heavily studied
synthetic functions and real-world applications. Experimental results show that
our techniques, while simple, can significantly enhance the standard BO and
outperform state-of-the-art methods by a large margin in most test cases.
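The core idea of the abstract, seeding the AF maximizer from the optimization history instead of relying on purely random restarts, can be sketched in plain NumPy. The following is an illustrative toy, not the authors' implementation: it assumes a GP surrogate with a fixed RBF kernel, a UCB acquisition, and simple jitter around the best observed points as a stand-in for the paper's heuristic optimizers.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-6):
    """GP posterior mean/std at X_query (zero prior mean on standardized y)."""
    y_mean, y_std = y_train.mean(), y_train.std() + 1e-12
    z = (y_train - y_mean) / y_std
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    Ks = rbf_kernel(X_train, X_query)                 # (n_train, n_query)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)
    return mu * y_std + y_mean, np.sqrt(var) * y_std

def propose_next(X, y, rng, dim, beta=1.0, history_init=True):
    """Maximize a UCB acquisition over a candidate set.

    Besides uniform random candidates, the search is seeded with jittered
    copies of the best points observed so far -- a toy stand-in for the
    history-driven AF-maximizer initialization the paper advocates."""
    cand = rng.uniform(0.0, 1.0, size=(256, dim))
    if history_init:
        top = X[np.argsort(y)[-3:]]                   # 3 best historical points
        jitter = top[None, :, :] + 0.05 * rng.standard_normal((64, 3, dim))
        cand = np.vstack([cand, np.clip(jitter.reshape(-1, dim), 0.0, 1.0)])
    mu, sd = gp_posterior(X, y, cand)
    return cand[np.argmax(mu + beta * sd)]

def run_bo(f, dim=2, n_init=5, iters=10, seed=0):
    """Minimal BO loop maximizing f over the unit hypercube."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, dim))
    y = np.array([f(x) for x in X])
    for _ in range(iters):
        x_next = propose_next(X, y, rng, dim)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X, y

# Toy maximization target with optimum 0.0 at x = (0.5, 0.5).
X, y = run_bo(lambda x: -np.sum((x - 0.5) ** 2))
```

In a high-dimensional setting the random candidate set covers the space poorly, which is exactly the failure mode the abstract describes; the history-seeded candidates keep the AF maximizer anchored near promising regions.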
Related papers
- FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch [21.41322548859776]
We show how FunBO can be used to learn new acquisition functions written in computer code.
We show how FunBO identifies AFs that generalize well in and out of the training distribution of functions.
arXiv Detail & Related papers (2024-06-07T10:49:59Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- A General Framework for User-Guided Bayesian Optimization [51.96352579696041]
We propose ColaBO, the first Bayesian-principled framework for prior beliefs beyond the typical kernel structure.
We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
arXiv Detail & Related papers (2023-11-24T18:27:26Z)
- Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis [11.143778114800272]
We show that a dynamic selection of the AF can benefit the BO design.
We pave the way towards AutoML-assisted, on-the-fly BO designs that adjust their behavior on a run-by-run basis.
arXiv Detail & Related papers (2022-11-17T17:15:04Z)
- Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization [29.178417789839102]
Optimizing black-box functions of discrete (and potentially continuous) design parameters is a ubiquitous problem in scientific and engineering applications.
We propose using probabilistic reparameterization (PR) to maximize the expectation of the acquisition function (AF) over a probability distribution.
PR is complementary to (and benefits) recent work and naturally generalizes to settings with multiple objectives and black-box constraints.
arXiv Detail & Related papers (2022-10-18T22:41:00Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Robust Bayesian optimization with reinforcement learned acquisition functions [4.05984965639419]
In Bayesian optimization, the acquisition function (AF) guides sequential sampling and plays a pivotal role in efficient convergence to better optima.
To address this crux, the idea of data-driven AF selection is proposed.
The sequential AF selection task is formalized as a Markov decision process (MDP) and tackled with powerful reinforcement learning (RL) technologies.
arXiv Detail & Related papers (2022-10-02T09:59:06Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.