Inversion-based Latent Bayesian Optimization
- URL: http://arxiv.org/abs/2411.05330v1
- Date: Fri, 08 Nov 2024 05:06:47 GMT
- Title: Inversion-based Latent Bayesian Optimization
- Authors: Jaewon Chu, Jinyoung Park, Seunghun Lee, Hyunwoo J. Kim
- Abstract summary: Inversion-based Latent Bayesian Optimization (InvBO) is a plug-and-play module for Latent Bayesian optimization.
InvBO consists of two components: an inversion method and a potential-aware trust region anchor selection.
Experimental results demonstrate the effectiveness of InvBO on nine real-world benchmarks.
- Score: 18.306286370684205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent Bayesian optimization (LBO) approaches have successfully adopted Bayesian optimization over a continuous latent space by employing an encoder-decoder architecture to address the challenge of optimization in a high-dimensional or discrete input space. LBO learns a surrogate model to approximate the black-box objective function in the latent space. However, we observed that most LBO methods suffer from the "misalignment problem", which is induced by the reconstruction error of the encoder-decoder architecture. It hinders learning an accurate surrogate model and generating high-quality solutions. In addition, several trust region-based LBO methods select the anchor, the center of the trust region, based solely on the objective function value, without considering the trust region's potential to enhance the optimization process. To address these issues, we propose Inversion-based Latent Bayesian Optimization (InvBO), a plug-and-play module for LBO. InvBO consists of two components: an inversion method and a potential-aware trust region anchor selection. The inversion method searches for the latent code that completely reconstructs the given target data. The potential-aware trust region anchor selection considers the potential capability of the trust region for better local optimization. Experimental results demonstrate the effectiveness of InvBO on nine real-world benchmarks, such as molecule design and arithmetic expression fitting tasks. Code is available at https://github.com/mlvlab/InvBO.
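The inversion idea in the abstract, searching for a latent code whose decoding reconstructs a given target, can be sketched as a simple search loop. This is an illustrative stand-in, not the paper's implementation: the `decode` callable, the derivative-free hill-climbing search, and all parameters are assumptions (InvBO itself works with a trained encoder-decoder and gradient-based optimization).

```python
import numpy as np

def invert(decode, target, z_dim=8, steps=200, lr=0.1, seed=0):
    """Search for a latent code whose decoding reconstructs `target`.

    A dependency-free sketch: random-restart hill climbing on the
    squared reconstruction error, accepting only improving moves.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=z_dim)
    best_z, best_err = z, np.sum((decode(z) - target) ** 2)
    for _ in range(steps):
        cand = best_z + lr * rng.normal(size=z_dim)
        err = np.sum((decode(cand) - target) ** 2)
        if err < best_err:
            best_z, best_err = cand, err
    return best_z, best_err
```

Because only improving candidates are accepted, the reconstruction error is non-increasing over the search; the returned latent code is the best one found under the given budget.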
Related papers
- Trust Region-Based Bayesian Optimisation to Discover Diverse Solutions [8.727449523567673]
We explore the effectiveness of trust region-based BO algorithms for diversity optimisation in black-box problems of different dimensionalities. We propose diversity optimisation approaches extending TuRBO1, the first BO method that uses a trust region-based approach for scalability. We evaluate the proposed algorithms on benchmark functions with dimensions 2 to 20.
arXiv Detail & Related papers (2025-11-02T00:31:37Z) - Feasibility-Driven Trust Region Bayesian Optimization [0.048748194765816946]
FuRBO iteratively defines a trust region from which the next candidate solution is selected. We empirically demonstrate the effectiveness of FuRBO through extensive testing on the full BBOB-constrained benchmark suite.
arXiv Detail & Related papers (2025-06-17T15:16:22Z) - Latent Bayesian Optimization via Autoregressive Normalizing Flows [17.063294409131238]
We propose a Normalizing Flow-based Bayesian Optimization (NF-BO) to solve the value discrepancy problem.
Our method demonstrates superior performance in molecule generation tasks, significantly outperforming both traditional and recent LBO approaches.
arXiv Detail & Related papers (2025-04-21T06:36:09Z) - Robust Bayesian Optimization via Localized Online Conformal Prediction [37.549297668783254]
We introduce localized online conformal prediction-based Bayesian optimization (LOCBO)
LOCBO calibrates the GP model through localized online conformal prediction (CP)
We provide theoretical performance guarantees for LOCBO's iterates that hold for the unobserved objective function.
arXiv Detail & Related papers (2024-11-26T12:45:54Z) - LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
arXiv Detail & Related papers (2023-11-19T13:56:24Z) - Advancing Bayesian Optimization via Learning Correlated Latent Space [15.783344085533187]
We propose Correlated latent space Bayesian Optimization (CoBO), which focuses on learning correlated latent spaces.
Specifically, our method introduces Lipschitz regularization, loss weighting, and trust region recoordination to minimize the inherent gap around the promising areas.
We demonstrate the effectiveness of our approach on several optimization tasks in discrete data, such as molecule design and arithmetic expression fitting.
arXiv Detail & Related papers (2023-10-31T08:24:41Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Bayesian Optimization for Macro Placement [48.55456716632735]
We develop a novel approach to macro placement using Bayesian optimization (BO) over sequence pairs.
BO is a machine learning technique that uses a probabilistic surrogate model and an acquisition function.
We demonstrate our algorithm on the fixed-outline macro placement problem with the half-perimeter wire length objective.
arXiv Detail & Related papers (2022-07-18T06:17:06Z) - A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
arXiv Detail & Related papers (2022-06-27T03:55:27Z) - Combining Latent Space and Structured Kernels for Bayesian Optimization over Combinatorial Spaces [27.989924313988016]
We consider the problem of optimizing combinatorial spaces (e.g., sequences, trees, and graphs) using expensive black-box function evaluations.
A recent BO approach for such spaces reduces the problem to BO over continuous spaces by learning a latent representation of structures.
This paper proposes a principled approach referred as LADDER to overcome this drawback.
arXiv Detail & Related papers (2021-11-01T18:26:22Z) - BORE: Bayesian Optimization by Density-Ratio Estimation [34.22533785573784]
We cast the expected improvement (EI) function as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation.
This reformulation provides numerous advantages, not least in terms of versatility, and scalability.
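The reformulation described here can be illustrated with a toy classifier: points whose objective falls in the best γ-fraction of observations are labelled positive, a probabilistic classifier is fit, and its predicted class probability plays the role of the acquisition function. A minimal numpy-only sketch, assuming minimisation and a hand-rolled logistic regression (the paper uses more expressive classifiers):

```python
import numpy as np

def bore_acquisition(X, y, gamma=0.25, iters=500, lr=0.5):
    """BORE-style acquisition sketch: label the best gamma-fraction of
    observed points as positives, fit a logistic classifier by gradient
    descent, and return its class probability as the acquisition."""
    tau = np.quantile(y, gamma)            # minimisation: low y is good
    labels = (y <= tau).astype(float)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - labels) / len(X)  # logistic-loss gradient step
    def acq(x):
        xb = np.append(np.atleast_1d(x), 1.0)
        return 1.0 / (1.0 + np.exp(-xb @ w))
    return acq
```

Maximising this classifier probability then stands in for maximising expected improvement, which is the link between class-probability estimation and density-ratio estimation that the abstract refers to.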
arXiv Detail & Related papers (2021-02-17T20:04:11Z) - TREGO: a Trust-Region Framework for Efficient Global Optimization [63.995130144110156]
We propose and analyze a trust-region-like EGO method (TREGO)
TREGO alternates between regular EGO steps and local steps within a trust region.
Our algorithm enjoys strong global convergence properties, while departing from EGO only for a subset of optimization steps.
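The alternation described above can be sketched as a loop that switches between a global proposal and a local proposal restricted to a trust region around the incumbent. This is an illustrative skeleton only: random sampling stands in for the EGO/expected-improvement acquisition step of the actual method, and the fixed trust-region size is an assumption.

```python
import numpy as np

def trego_loop(f, bounds, n_iter=60, tr_frac=0.2, seed=0):
    """TREGO-style alternation (sketch): even iterations propose a
    candidate globally; odd iterations sample only within a trust
    region centred on the current best point."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    for t in range(n_iter):
        if t % 2 == 0:                        # global (EGO-like) step
            x = rng.uniform(lo, hi)
        else:                                 # local step in trust region
            r = tr_frac * (hi - lo)
            x = np.clip(best_x + rng.uniform(-r, r), lo, hi)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

Because global steps are interleaved throughout, the loop never commits permanently to a local region, which mirrors how TREGO retains EGO's global convergence behaviour.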
arXiv Detail & Related papers (2021-01-18T00:14:40Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
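The warping idea can be illustrated with a toy transform: uniform draws are pushed through a map whose derivative vanishes at the prior mode, so candidates concentrate there and thin out in the tails. A minimal sketch, assuming a one-dimensional space and a simple polynomial warp; the paper's actual warping is constructed from the prior distribution itself.

```python
import numpy as np

def warped_candidates(n, mode, bounds, sharpness=2.0, seed=0):
    """Space-warping sketch: pull uniform draws toward the prior mode
    with an odd-power map, increasing candidate density near the mode
    (illustrative stand-in for prior-based space warping)."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                  # uniform draws in [0, 1]
    m = (mode - lo) / (hi - lo)              # prior mode, normalised
    # |u - m| <= 1, so raising it to a power > 1 shrinks the distance
    v = m + np.sign(u - m) * np.abs(u - m) ** sharpness
    return lo + (hi - lo) * np.clip(v, 0.0, 1.0)
```

Higher `sharpness` concentrates candidates more aggressively around the assumed optimum location, which is the trade-off an expert prior introduces: faster progress when the prior is right, slower exploration of the tails when it is wrong.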
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.