Computationally Efficient High-Dimensional Bayesian Optimization via
Variable Selection
- URL: http://arxiv.org/abs/2109.09264v2
- Date: Mon, 12 Feb 2024 17:22:28 GMT
- Title: Computationally Efficient High-Dimensional Bayesian Optimization via
Variable Selection
- Authors: Yihang Shen and Carl Kingsford
- Abstract summary: We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method automatically learns axis-aligned sub-spaces, i.e., spaces containing the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
- Score: 0.5439020425818999
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian Optimization (BO) is a method for globally optimizing black-box
functions. While BO has been successfully applied to many scenarios, developing
effective BO algorithms that scale to functions with high-dimensional domains
is still a challenge. Optimizing such functions by vanilla BO is extremely
time-consuming. Alternative strategies for high-dimensional BO that are based
on embedding the high-dimensional space into a lower-dimensional one are
sensitive to the choice of the embedding dimension, which needs
to be pre-specified. We develop a new computationally efficient
high-dimensional BO method that exploits variable selection. Our method is able
to automatically learn axis-aligned sub-spaces, i.e., spaces containing the
selected variables, without requiring any pre-specified hyperparameters. We
theoretically analyze the computational complexity of our algorithm and derive
the regret bound. We empirically show the efficacy of our method on several
synthetic and real problems.
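In case a concrete picture helps, here is a minimal sketch of the general idea, not the authors' exact algorithm (whose variable-selection and acquisition steps follow the paper): score each dimension with a crude perturbation-based sensitivity probe, then search only over the selected axis-aligned subspace while clamping the remaining coordinates. The function names, the sensitivity heuristic, and the random-search inner loop are illustrative placeholders.

```python
import numpy as np

def select_variables(f, x_best, bounds, n_probe=8, top_k=5, rng=None):
    """Rank dimensions by a crude perturbation-based sensitivity score."""
    rng = rng or np.random.default_rng(0)
    base, d = f(x_best), len(x_best)
    scores = np.zeros(d)
    for i in range(d):
        for _ in range(n_probe):
            x = x_best.copy()
            x[i] = rng.uniform(bounds[i, 0], bounds[i, 1])
            scores[i] += abs(f(x) - base)
    return np.argsort(scores)[-top_k:]           # indices of the most active dims

def subspace_step(f, x_best, bounds, active, n_cand=256, rng=None):
    """Propose a new point that varies only the selected variables."""
    rng = rng or np.random.default_rng(1)
    cand = np.tile(x_best, (n_cand, 1))          # clamp the inactive dims
    cand[:, active] = rng.uniform(bounds[active, 0], bounds[active, 1],
                                  size=(n_cand, len(active)))
    vals = np.array([f(x) for x in cand])
    return cand[vals.argmin()]

# toy use: a 20-D function in which only the first 3 variables matter
f = lambda x: float((x[:3] ** 2).sum())
bounds = np.array([[-1.0, 1.0]] * 20)
x = np.full(20, 0.5)
active = select_variables(f, x, bounds)
x = subspace_step(f, x, bounds, active)
```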
Related papers
- High dimensional Bayesian Optimization via Condensing-Expansion Projection [1.6355174910200032]
In high-dimensional settings, Bayesian optimization (BO) can be computationally expensive or even infeasible.
We introduce a novel random projection-based approach for high-dimensional BO that does not rely on the effective subspace assumption.
Experimental results demonstrate that both algorithms outperform existing random embedding-based algorithms in most cases.
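A hedged toy of the condense-then-expand pattern the summary describes: search in a random low-dimensional space and map each candidate back to the original box through the random projection. The projection scale, box bounds, and the random-search stand-in for BO are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 5                                  # ambient and projected dimensions
A = rng.standard_normal((D, d)) / np.sqrt(d)   # random embedding matrix

f = lambda x: float(((x[:3] - 0.5) ** 2).sum())  # toy objective, 3 active dims

best_z, best_val = None, np.inf
for _ in range(200):                           # random search stands in for BO
    z = rng.uniform(-1.0, 1.0, size=d)         # condense: search in low dim
    x = np.clip(A @ z, -1.0, 1.0)              # expand: map back, clip to box
    val = f(x)
    if val < best_val:
        best_z, best_val = z, val
print(best_val)
```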
arXiv Detail & Related papers (2024-08-09T04:47:38Z) - SCORE: A 1D Reparameterization Technique to Break Bayesian Optimization's Curse of Dimensionality [0.0]
A 1D reparameterization trick is proposed to break this curse and sustain linear time complexity for BO in high-dimensional landscapes.
This fast and scalable approach, named SCORE, can successfully find the global minimum of needle-in-a-haystack optimization functions.
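The paper's actual SCORE reparameterization is not spelled out in this summary, so the following is only a generic stand-in for running 1-D inner optimizations in a high-dimensional landscape: restrict each step to a random line through the incumbent, which keeps the per-step cost independent of the ambient dimension.

```python
import numpy as np

def line_search_step(f, x, lo, hi, rng, n=64):
    u = rng.standard_normal(len(x))
    u /= np.linalg.norm(u)                      # random unit direction
    ts = np.linspace(-1.0, 1.0, n)
    cand = np.clip(x + ts[:, None] * u, lo, hi) # a 1-D family of candidates
    vals = np.array([f(c) for c in cand])
    return cand[vals.argmin()]

rng = np.random.default_rng(0)
f = lambda x: float(((x - 0.3) ** 2).sum())
x = np.zeros(1000)                              # high-dimensional start
for _ in range(50):
    x = line_search_step(f, x, -1.0, 1.0, rng)
print(f(x))
```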
arXiv Detail & Related papers (2024-06-18T14:28:29Z) - Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
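A small illustration of what a regret-to-go token could look like, by analogy with return-to-go in decision transformers: at step t, the token is the cumulative simple regret over the remaining (future) part of the history. Treating the optimum value f_star as known is an assumption made here for illustration.

```python
import numpy as np

def regret_to_go(values, f_star):
    """values[t] = f(x_t) observed at step t (minimization)."""
    inst_regret = np.asarray(values) - f_star      # per-step simple regret
    # reverse cumulative sum: token[t] = sum of regrets from t to the end
    return inst_regret[::-1].cumsum()[::-1]

history = [2.0, 1.5, 1.1, 1.05, 1.0]
print(regret_to_go(history, f_star=1.0))           # [1.65 0.65 0.15 0.05 0.  ]
```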
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - Non-Convex Bilevel Optimization with Time-Varying Objective Functions [57.299128109226025]
We propose an online bilevel optimization framework where the functions can be time-varying and the agent continuously updates its decisions with online data.
Compared to existing algorithms, the proposed SOBOW is computationally efficient and does not need to know previous functions.
We show that SOBOW can achieve a sublinear bilevel local regret under mild conditions.
arXiv Detail & Related papers (2023-08-07T06:27:57Z) - Scalable Bayesian optimization with high-dimensional outputs using
randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
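A compact sketch of the bootstrapped randomized-prior recipe the summary points to, with a linear model standing in for the neural architectures: each ensemble member fits a bootstrap resample of the data on top of a fixed, untrained random "prior" function, and ensemble spread serves as the epistemic-uncertainty signal for BO. The prior scale beta and the linear features are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_member(X, y, beta=1.0):
    n, d = X.shape
    idx = rng.integers(0, n, size=n)               # bootstrap resample
    w_prior = rng.standard_normal(d)               # fixed random prior weights
    resid = y[idx] - beta * X[idx] @ w_prior       # fit residual on top of prior
    w, *_ = np.linalg.lstsq(X[idx], resid, rcond=None)
    return lambda Xq: Xq @ w + beta * Xq @ w_prior

X = rng.uniform(-1, 1, size=(30, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.05 * rng.standard_normal(30)
ensemble = [make_member(X, y) for _ in range(10)]

Xq = rng.uniform(-1, 1, size=(5, 4))
preds = np.stack([member(Xq) for member in ensemble])  # (members, queries)
mu, sigma = preds.mean(0), preds.std(0)            # mean and uncertainty
```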
arXiv Detail & Related papers (2023-02-14T18:55:21Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
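A rough Monte-Carlo rendering of the InfoBAX selection rule under toy modeling assumptions (a Bayesian linear model in fixed features, with grid-argmin as the "algorithm"): estimate the mutual information between a candidate observation y_x and the algorithm output by grouping posterior function samples according to their output and comparing the marginal entropy against the average within-group entropy.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-1, 1, 101)
Phi = np.stack([np.sin(3 * grid), np.cos(3 * grid), grid, grid ** 2], axis=1)

# conjugate Gaussian posterior over weights after a few noisy observations
obs_idx = np.array([10, 50, 90])
y_obs = np.array([0.8, -0.2, 0.5])
A = Phi[obs_idx]
S = np.linalg.inv(A.T @ A / 0.01 + np.eye(4))       # posterior covariance
m = S @ A.T @ y_obs / 0.01                          # posterior mean

W = rng.multivariate_normal(m, S, size=500)         # posterior weight samples
F = W @ Phi.T                                       # sampled functions on grid
outputs = F.argmin(axis=1)                          # algorithm output per sample

def gauss_entropy(v):                               # crude differential entropy
    return 0.5 * np.log(2 * np.pi * np.e * (v.var() + 1e-9))

def eig(ix):                                        # MI between y_x and output
    total = gauss_entropy(F[:, ix])
    cond = 0.0
    for o in np.unique(outputs):
        grp = F[outputs == o, ix]
        cond += (len(grp) / len(F)) * gauss_entropy(grp)
    return total - cond

next_query = grid[max(range(len(grid)), key=eig)]
```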
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - High-Dimensional Bayesian Optimization with Sparse Axis-Aligned
Subspaces [14.03847432040056]
We argue that a surrogate model defined on sparse axis-aligned subspaces offers an attractive compromise between flexibility and parsimony.
We demonstrate that our approach, which relies on Hamiltonian Monte Carlo for inference, can rapidly identify sparse subspaces relevant to modeling the unknown objective function.
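The following toy draw illustrates why a sparse axis-aligned subspace prior concentrates on a few dimensions: a global half-Cauchy shrinkage scale multiplies per-dimension half-Cauchy scales, so most inverse squared lengthscales collapse toward zero and the corresponding dimensions are effectively ignored by the kernel. The scales and threshold here are illustrative; the paper's actual prior and its Hamiltonian Monte Carlo inference are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
tau = abs(rng.standard_cauchy()) * 0.1        # global shrinkage (half-Cauchy)
rho = tau * abs(rng.standard_cauchy(d))       # per-dim inverse sq. lengthscales
active = np.where(rho > 0.1)[0]               # dims the kernel actually uses
print(len(active), "of", d, "dimensions treated as relevant")

# the resulting ARD kernel: k(x, x') = exp(-0.5 * sum_i rho_i (x_i - x'_i)^2)
k = lambda x, xp: np.exp(-0.5 * np.sum(rho * (x - xp) ** 2))
```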
arXiv Detail & Related papers (2021-02-27T23:06:24Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search
Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
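A minimal sketch of expanding (and shifting) the search space over iterations, as the summary describes; the geometric growth schedule and the random-search inner loop are placeholders for the paper's algorithm and its BO machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: float(((x - 7.0) ** 2).sum())   # optimum far outside initial box

center, half_width = np.zeros(3), 1.0
best_x, best_val = center.copy(), f(center)
for t in range(30):
    lo, hi = center - half_width, center + half_width
    for x in rng.uniform(lo, hi, size=(20, 3)):   # random search stands in for BO
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    center = best_x                            # shift the box toward the incumbent
    half_width *= 1.2                          # expand the search space
print(best_val)
```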
arXiv Detail & Related papers (2020-09-05T14:24:40Z) - Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
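One hedged way to realize "learn the manifold while optimizing": keep the recent improving points, estimate their top principal directions, and bias new random-search perturbations toward that learned subspace. The elite-set size, step size, and linear-manifold assumption are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D, k = 100, 3
U_true = np.linalg.qr(rng.standard_normal((D, k)))[0]
f = lambda x: float(((U_true.T @ x - 1.0) ** 2).sum())  # varies on a manifold

x, elites = np.zeros(D), []
for t in range(300):
    if len(elites) >= 10:                          # fit the manifold from elites
        E = np.array(elites) - np.mean(elites, axis=0)
        U = np.linalg.svd(E, full_matrices=False)[2][:k].T  # top-k directions
        step = U @ rng.standard_normal(k)          # search along the manifold
    else:
        step = rng.standard_normal(D)              # pure random search at first
    cand = x + 0.1 * step / np.linalg.norm(step)
    if f(cand) < f(x):
        x = cand
        elites = (elites + [x])[-25:]              # keep recent improving points
print(f(x))
```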
arXiv Detail & Related papers (2020-04-25T19:21:14Z) - Bayesian Optimization for Policy Search in High-Dimensional Systems via
Automatic Domain Selection [1.1240669509034296]
We propose to leverage results from optimal control to scale BO to higher-dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
arXiv Detail & Related papers (2020-01-21T09:04:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.