Computationally Efficient High-Dimensional Bayesian Optimization via
Variable Selection
- URL: http://arxiv.org/abs/2109.09264v2
- Date: Mon, 12 Feb 2024 17:22:28 GMT
- Title: Computationally Efficient High-Dimensional Bayesian Optimization via
Variable Selection
- Authors: Yihang Shen and Carl Kingsford
- Abstract summary: We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method is able to automatically learn axis-aligned sub-spaces, i.e. spaces containing selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
- Score: 0.5439020425818999
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian Optimization (BO) is a method for globally optimizing black-box
functions. While BO has been successfully applied to many scenarios, developing
effective BO algorithms that scale to functions with high-dimensional domains
is still a challenge. Optimizing such functions by vanilla BO is extremely
time-consuming. Alternative strategies for high-dimensional BO based on
embedding the high-dimensional space into a lower-dimensional one are sensitive
to the choice of the embedding dimension, which must be pre-specified. We
develop a new computationally efficient high-dimensional BO method that
exploits variable selection. Our method automatically learns axis-aligned
sub-spaces, i.e. spaces containing the selected variables, without requiring
any pre-specified hyperparameters. We
theoretically analyze the computational complexity of our algorithm and derive
the regret bound. We empirically show the efficacy of our method on several
synthetic and real problems.
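The abstract describes the approach only at a high level; the following is a minimal, hypothetical sketch of Bayesian optimization with variable selection, not the authors' implementation: fit a GP with per-dimension (ARD) lengthscales, treat the dimensions with the shortest lengthscales as the selected variables, and optimize the acquisition function only over that axis-aligned subspace. The function names, the fixed `top_k` selection rule, and the random-search acquisition optimizer are all assumptions (the paper learns the subspace automatically rather than using such a pre-specified hyperparameter).

```python
# Minimal sketch (not the authors' code): BO that selects an axis-aligned
# subspace of "important" variables from ARD lengthscales, then optimizes
# the acquisition only over those variables.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best_y):
    # EI for minimization; a small floor on sigma avoids division by zero.
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_with_variable_selection(f, dim, n_init=10, n_iter=40, top_k=5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, dim))   # unit hypercube domain
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(dim)),
                                      normalize_y=True).fit(X, y)
        # Variable selection: short ARD lengthscales = influential dimensions.
        # (Assumed selection rule; the paper avoids a fixed top_k.)
        ls = gp.kernel_.length_scale
        active = np.argsort(ls)[:top_k]
        best = X[np.argmin(y)]
        # Optimize the acquisition only over the active (axis-aligned) subspace,
        # keeping the remaining coordinates fixed at the incumbent's values.
        cand = np.tile(best, (2048, 1))
        cand[:, active] = rng.uniform(0.0, 1.0, size=(2048, len(active)))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Example usage on a toy function where only the first 3 of 25 dims matter.
best_x, best_y = bo_with_variable_selection(
    lambda x: np.sum((x[:3] - 0.5) ** 2), dim=25)
```

The point of the sketch is that the GP is still fit on the full input, but the inner acquisition search runs over only a handful of coordinates, which is what keeps the per-iteration cost low in high dimensions.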
Related papers
- High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method combines in an adaptive way both random and supervised linear embeddings.
The obtained results show the high potential of EGORSE to solve high-dimensional black-box optimization problems.
arXiv Detail & Related papers (2025-02-02T16:57:05Z)
- High-Dimensional Bayesian Optimization via Random Projection of Manifold Subspaces [0.0]
A common framework to tackle this problem is to assume that the objective function depends on a limited set of features that lie on a low-dimensional manifold embedded in the high-dimensional ambient space.
This paper proposes a new approach for BO in high dimensions by exploiting a new representation of the objective function.
Our approach enables efficient optimization of BO's acquisition function in the low-dimensional space, with the added advantage of projecting back to the original high-dimensional space compared to existing works in the same setting.
arXiv Detail & Related papers (2024-12-21T09:41:24Z)
- BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings [14.558601519561721]
We introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines.
We also propose an adaptive selection technique to identify the most promising lines for each round of line-based optimization.
Our experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.
arXiv Detail & Related papers (2024-12-17T13:51:24Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method that learns a black-box optimization (BBO) algorithm from offline data via reinforcement learning in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is augmenting the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories (see the sketch after this list).
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- Non-Convex Bilevel Optimization with Time-Varying Objective Functions [57.299128109226025]
We propose an online bilevel optimization framework in which the functions can be time-varying and the agent continuously updates its decisions with online data.
Compared to existing algorithms, SOBOW is computationally efficient and does not need to know previous functions.
We show that SOBOW can achieve a sublinear bilevel local regret under mild conditions.
arXiv Detail & Related papers (2023-08-07T06:27:57Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
- Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection [1.1240669509034296]
We propose to leverage results from optimal control to scale BO to higher dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
arXiv Detail & Related papers (2020-01-21T09:04:15Z)
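To make the regret-to-go tokens mentioned in the RIBBO entry above concrete, here is a small, hypothetical sketch (not RIBBO's actual tokenization): for a minimization history of length T, the token at step t is the cumulative simple regret accumulated over the remaining steps t..T. The optimum value is assumed known here purely for illustration.

```python
# Hypothetical sketch: augment an optimization history with regret-to-go
# tokens, i.e. the cumulative regret over the *future* part of the history.
import numpy as np

def regret_to_go(y_history, y_optimum):
    # Instantaneous regret at each step (minimization); y_optimum is assumed
    # known here for illustration.
    r = np.asarray(y_history) - y_optimum
    # Suffix sums: the token at step t covers steps t..T-1.
    return np.cumsum(r[::-1])[::-1]

ys = [3.0, 2.5, 1.2, 1.1, 1.0]        # observed objective values
tokens = regret_to_go(ys, y_optimum=1.0)
# tokens == [3.8, 1.8, 0.3, 0.1, 0.0]
```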
This list is automatically generated from the titles and abstracts of the papers on this site.