Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
- URL: http://arxiv.org/abs/2001.11659v2
- Date: Thu, 22 Oct 2020 18:30:08 GMT
- Title: Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
- Authors: Benjamin Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy
- Abstract summary: We identify several crucial issues and misconceptions about the use of linear embeddings for BO.
We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO.
- Score: 20.511115436145467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) is a popular approach to optimize
expensive-to-evaluate black-box functions. A significant challenge in BO is to
scale to high-dimensional parameter spaces while retaining sample efficiency. A
solution considered in existing literature is to embed the high-dimensional
space in a lower-dimensional manifold, often via a random linear embedding. In
this paper, we identify several crucial issues and misconceptions about the use
of linear embeddings for BO. We study the properties of linear embeddings from
the literature and show that some of the design choices in current approaches
adversely impact their performance. We show empirically that properly
addressing these issues significantly improves the efficacy of linear
embeddings for BO on a range of problems, including learning a gait policy for
robot locomotion.
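To make the embedding idea concrete, here is a minimal, self-contained Python sketch of BO through a random linear embedding: draw a random matrix B, run the surrogate and acquisition in the d-dimensional space, and map each candidate up through B (with clipping to the box) before evaluating. This is an illustration under stated assumptions, not the authors' implementation: the objective is a toy, the GP is a fixed-hyperparameter RBF model, acquisition search uses random candidates, and the names (`objective`, `to_ambient`, `gp_posterior`) are hypothetical. Notably, the clip-to-box projection is itself one of the design choices the paper scrutinizes.

```python
# Minimal sketch of BO with a random linear embedding (numpy/scipy only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

D, d = 100, 4                 # ambient and embedding dimensions (illustrative)
B = rng.normal(size=(D, d))   # random linear embedding R^d -> R^D

def objective(x):
    # Hypothetical black-box: only the first few coordinates matter.
    return -np.sum((x[:4] - 0.2) ** 2)

def to_ambient(y):
    # Map a low-dimensional point up and clip to the box [-1, 1]^D.
    return np.clip(B @ y, -1.0, 1.0)

def gp_posterior(Y, f, Ycand, ls=1.0, noise=1e-6):
    # Zero-mean GP with a unit-variance RBF kernel, fit in the embedded space.
    def k(A, C):
        d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)
    K = k(Y, Y) + noise * np.eye(len(Y))
    Ks = k(Ycand, Y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v ** 2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for maximization.
    z = (mu - best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

# Initial design in the low-dimensional box, then a few BO iterations.
Y = rng.uniform(-1, 1, size=(5, d))
f = np.array([objective(to_ambient(y)) for y in Y])
for _ in range(20):
    cand = rng.uniform(-1, 1, size=(512, d))   # random acquisition search
    mu, sigma = gp_posterior(Y, f, cand)
    y_next = cand[np.argmax(expected_improvement(mu, sigma, f.max()))]
    Y = np.vstack([Y, y_next])
    f = np.append(f, objective(to_ambient(y_next)))

print("best value found:", f.max())
```

Because only d coordinates are optimized, the GP is fit in R^d and sample efficiency is governed by d rather than D; the paper's analysis concerns how design choices such as these affect whether an optimum remains reachable through the embedding.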
Related papers
- High-Dimensional Bayesian Optimization via Random Projection of Manifold Subspaces [0.0]
A common framework to tackle this problem is to assume that the objective function depends on a limited set of features that lie on a low-dimensional manifold embedded in the high-dimensional ambient space.
This paper proposes a new approach for BO in high dimensions by exploiting a new representation of the objective function.
Our approach enables efficient optimization of BO's acquisition function in the low-dimensional space, with the advantage over existing works in the same setting of projecting candidates back to the original high-dimensional space.
arXiv Detail & Related papers (2024-12-21T09:41:24Z)
- BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings [14.558601519561721]
We introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines.
We also propose an adaptive selection technique to identify the most promising lines for each round of line-based optimization.
Our experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.
arXiv Detail & Related papers (2024-12-17T13:51:24Z)
- Offline Stochastic Optimization of Black-Box Objective Functions [47.74033738624514]
It is essential to leverage existing data to avoid costly active queries of complex black-box functions.
We introduce stochastic offline black-box optimization (SOBBO), which tackles both black-box objectives and uncontrolled uncertainties.
Numerical experiments demonstrate the effectiveness of our approach on both synthetic and real-world tasks.
arXiv Detail & Related papers (2024-12-03T02:20:30Z)
- Large Language Models to Enhance Bayesian Optimization [57.474613739645605]
We present LLAMBO, a novel approach that integrates the capabilities of Large Language Models (LLM) within Bayesian optimization.
At a high level, we frame the BO problem in natural language, enabling LLMs to iteratively propose and evaluate promising solutions conditioned on historical evaluations.
Our findings illustrate that LLAMBO is effective at zero-shot warmstarting, and enhances surrogate modeling and candidate sampling, especially in the early stages of search when observations are sparse.
arXiv Detail & Related papers (2024-02-06T11:44:06Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Prior assumptions on the parameters are kept minimal through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization [82.02008764719896]
Black-box model-based optimization problems are ubiquitous in a wide range of domains, such as the design of proteins, DNA sequences, aircraft, and robots.
We present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods.
Our benchmark includes a suite of diverse and realistic tasks derived from real-world optimization problems in biology, materials science, and robotics.
arXiv Detail & Related papers (2022-02-17T05:33:27Z)
- Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method automatically learns axis-aligned subspaces, i.e., subspaces spanned by the selected variables; a toy sketch of this idea appears after this list.
We empirically show the efficacy of our method on several synthetic and real problems.
arXiv Detail & Related papers (2021-09-20T01:55:43Z)
- High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z)
- High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces [14.03847432040056]
We argue that a surrogate model defined on sparse axis-aligned subspaces offers an attractive compromise between flexibility and parsimony.
We demonstrate that our approach, which relies on Hamiltonian Monte Carlo for inference, can rapidly identify sparse subspaces relevant to modeling the unknown objective function.
arXiv Detail & Related papers (2021-02-27T23:06:24Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the training pipeline yields predictions that improve the quality of the downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
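The variable-selection idea from "Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection" above can be illustrated with a short, self-contained sketch. This is not that paper's algorithm: a crude finite-difference sensitivity score stands in for its learned selection of variables, and random search stands in for a full BO loop in the selected axis-aligned subspace; `objective` and all constants are hypothetical.

```python
# Minimal sketch of axis-aligned variable selection for optimization (numpy only).
import numpy as np

rng = np.random.default_rng(1)
D, k = 50, 5  # ambient dimension and number of variables to keep

def objective(x):
    # Hypothetical black-box: only coordinates 0..4 matter.
    return -np.sum((x[:5] - 0.3) ** 2)

# Probe each axis with central finite differences around a reference point
# to get a crude per-variable sensitivity score.
x0 = np.zeros(D)
eps = 1e-2
scores = np.array([
    abs(objective(x0 + eps * np.eye(D)[i]) - objective(x0 - eps * np.eye(D)[i]))
    for i in range(D)
])
active = np.argsort(scores)[-k:]  # keep the k most sensitive axes

# Optimize only over the selected axis-aligned subspace (random search
# here as a stand-in for a surrogate-based BO loop in the subspace).
best_x, best_f = x0.copy(), objective(x0)
for _ in range(200):
    x = x0.copy()
    x[active] = rng.uniform(-1, 1, size=k)
    fx = objective(x)
    if fx > best_f:
        best_x, best_f = x, fx

print("selected axes:", sorted(active.tolist()), "best value:", best_f)
```

The design choice shared by these methods is to spend the optimization budget only on the few variables the model deems relevant, so the effective dimension of the search is k rather than D.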
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.