Local Latent Space Bayesian Optimization over Structured Inputs
- URL: http://arxiv.org/abs/2201.11872v1
- Date: Fri, 28 Jan 2022 00:55:58 GMT
- Title: Local Latent Space Bayesian Optimization over Structured Inputs
- Authors: Natalie Maus, Haydn T. Jones, Juston S. Moore, Matt J. Kusner, John Bradshaw, Jacob R. Gardner
- Abstract summary: We propose LOL-BO, which adapts the notion of trust regions explored in recent work on high-dimensional Bayesian optimization to the structured setting.
LOL-BO achieves as much as 20 times improvement over state-of-the-art latent space Bayesian optimization methods across six real-world benchmarks.
- Score: 23.173329381303887
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Bayesian optimization over the latent spaces of deep autoencoder models
(DAEs) has recently emerged as a promising new approach for optimizing
challenging black-box functions over structured, discrete, hard-to-enumerate
search spaces (e.g., molecules). Here the DAE dramatically simplifies the
search space by mapping inputs into a continuous latent space where familiar
Bayesian optimization tools can be more readily applied. Despite this
simplification, the latent space typically remains high-dimensional. Thus, even
with a well-suited latent space, these approaches do not necessarily provide a
complete solution, but may rather shift the structured optimization problem to
a high-dimensional one. In this paper, we propose LOL-BO, which adapts the
notion of trust regions explored in recent work on high-dimensional Bayesian
optimization to the structured setting. By reformulating the encoder to
function as both an encoder for the DAE globally and as a deep kernel for the
surrogate model within a trust region, we better align the notion of local
optimization in the latent space with local optimization in the input space.
LOL-BO achieves as much as 20 times improvement over state-of-the-art latent
space Bayesian optimization methods across six real-world benchmarks,
demonstrating that improvement in optimization strategies is as important as
developing better DAE models.
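To make the trust-region idea concrete, here is a minimal, self-contained sketch of Bayesian optimization in a latent space with TuRBO-style trust-region bookkeeping. It is not the authors' implementation: the `encode`/`decode` stand-ins, the toy `objective`, and all hyperparameters are illustrative assumptions, and the plain RBF Gaussian process below replaces the deep-kernel surrogate described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stand-ins for a pretrained DAE (assumptions, not the paper's model) ---
def encode(x):             # input space -> latent space
    return x               # identity here; a real DAE would be a neural net

def decode(z):             # latent space -> input space
    return z

def objective(x):          # hypothetical black-box score to MAXIMIZE
    return -np.sum((x - 0.5) ** 2, axis=-1)

# --- A tiny GP surrogate (RBF kernel) over the latent space ---
def rbf(A, B, ls=0.25):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Z, y, Zq, noise=1e-6):
    K = rbf(Z, Z) + noise * np.eye(len(Z))
    Ks = rbf(Z, Zq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

# --- Trust-region BO loop in latent space (TuRBO-style bookkeeping) ---
dim, n_init, budget = 4, 10, 60
X = rng.uniform(0, 1, size=(n_init, dim))
y = objective(X)
tr_length, succ, fail = 0.4, 0, 0

for _ in range(budget):
    Z = encode(X)
    center = Z[np.argmax(y)]                     # trust region around incumbent
    lo, hi = center - tr_length / 2, center + tr_length / 2
    cand = rng.uniform(lo, hi, size=(512, dim))  # candidates inside the region
    mu, sigma = gp_posterior(Z, y, cand)
    z_next = cand[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
    x_next = decode(z_next)
    y_next = objective(x_next[None, :])
    # Expand the trust region after repeated successes, shrink after failures.
    if y_next > y.max():
        succ, fail = succ + 1, 0
    else:
        succ, fail = 0, fail + 1
    if succ >= 3:
        tr_length, succ = min(2 * tr_length, 1.6), 0
    elif fail >= 5:
        tr_length, fail = tr_length / 2, 0
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best value found:", y.max())
```

In LOL-BO proper, the encoder is shared with the surrogate's deep kernel so that locality in the latent space matches locality in the input space; the plain RBF surrogate above elides that coupling.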
Related papers
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both optimization performance and query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Advancing Bayesian Optimization via Learning Correlated Latent Space [15.783344085533187]
We propose Correlated latent space Bayesian Optimization (CoBO), which focuses on learning correlated latent spaces.
Specifically, our method introduces Lipschitz regularization, loss weighting, and trust region recoordination to minimize the inherent gap around the promising areas.
We demonstrate the effectiveness of our approach on several optimization tasks over discrete data, such as molecule design and arithmetic expression fitting.
arXiv Detail & Related papers (2023-10-31T08:24:41Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
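For orientation, here is a minimal sketch of ZO sign-based gradient descent under simplifying assumptions: the `objective` is a toy stand-in for a molecular score, and the query budget and step sizes are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Hypothetical smooth stand-in for a molecular score (to MAXIMIZE)."""
    return -np.sum((x - 0.3) ** 2)

def zo_sign_gd(f, x0, lr=0.02, mu=0.01, n_queries=20, steps=200):
    """Zeroth-order sign-based gradient ascent.

    Estimates the gradient from finite differences along random Gaussian
    directions, then steps along the SIGN of the estimate.
    """
    x = x0.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_queries):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x)) / mu * u
        x += lr * np.sign(g / n_queries)
    return x

x = zo_sign_gd(objective, rng.uniform(-1, 1, size=8))
print("final value:", objective(x))
```

Taking only the sign keeps the update magnitude fixed, which makes the method tolerant of noisy gradient-magnitude estimates from few queries.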
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces [27.989924313988016]
We consider the problem of optimizing combinatorial spaces (e.g., sequences, trees, and graphs) using expensive black-box function evaluations.
A recent BO approach for such combinatorial spaces reduces the problem to BO over continuous spaces by learning a latent representation of structures.
This paper proposes a principled approach, referred to as LADDER, to overcome the drawbacks of this reduction.
arXiv Detail & Related papers (2021-11-01T18:26:22Z)
- LinEasyBO: Scalable Bayesian Optimization Approach for Analog Circuit Synthesis via One-Dimensional Subspaces [11.64233949999656]
We propose a fast and robust Bayesian optimization approach via one-dimensional subspaces for analog circuit synthesis.
Our proposed algorithm can accelerate the optimization procedure by up to 9x and 38x compared to LP-EI and REMBOpBO, respectively, when the batch size is 15.
arXiv Detail & Related papers (2021-09-01T21:25:25Z)
- Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties measured on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z)
- Good practices for Bayesian Optimization of high dimensional structured spaces [15.488642552157131]
We study the effect of different search-space design choices for performing Bayesian Optimization in high-dimensional structured datasets.
We evaluate new methods to automatically define the optimization bounds in the latent space.
We provide recommendations for practitioners.
arXiv Detail & Related papers (2020-12-31T07:00:39Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to mitigating this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability region.
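One simple way to realize such a warping, offered as an illustrative sketch rather than the paper's exact transformation, is to map uniform coordinates through the inverse CDF of an assumed prior, so that uniform exploration concentrates where the prior places the optimum; the `prior_mean` and `prior_std` values below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed prior over the optimum's location: independent Gaussians per
# dimension (mean = expert's guess, std = expert's uncertainty).
prior_mean = np.array([0.7, 0.2])
prior_std = np.array([0.1, 0.15])

def warp(u):
    """Map uniform [0,1] coordinates through the prior's inverse CDF.

    Uniform samples in u-space land densely where the prior puts high
    probability on the optimum, effectively expanding that region.
    """
    return norm.ppf(u, loc=prior_mean, scale=prior_std)

# Uniform exploration in the warped space concentrates near the prior mode:
U = rng.uniform(0.001, 0.999, size=(1000, 2))
X = warp(U)
print("sample mean ~ prior mean:", X.mean(axis=0))
```

Dimensions are treated independently here; a correlated prior over the optimum would require a joint transformation.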
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.