High Dimensional Bayesian Optimization using Lasso Variable Selection
- URL: http://arxiv.org/abs/2504.01743v1
- Date: Wed, 02 Apr 2025 13:54:04 GMT
- Title: High Dimensional Bayesian Optimization using Lasso Variable Selection
- Authors: Vu Viet Hoang, Hung The Tran, Sunil Gupta, Vu Nguyen
- Abstract summary: We introduce a novel method that identifies important variables by estimating the length scales of Gaussian process kernels. We demonstrate that our proposed method achieves cumulative regret with a sublinear growth rate in the worst case. Experiments on high-dimensional synthetic functions and real-world problems show that our method achieves state-of-the-art performance.
- Score: 9.051539805042651
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is a leading method for optimizing expensive black-box functions and has been successfully applied across various scenarios. However, BO suffers from the curse of dimensionality, making it challenging to scale to high-dimensional problems. Existing work has adopted a variable selection strategy to select and optimize only a subset of variables iteratively. Although this approach can mitigate the high-dimensional challenge in BO, it still leads to sample inefficiency. To address this issue, we introduce a novel method that identifies important variables by estimating the length scales of Gaussian process kernels. Next, we construct an effective search region consisting of multiple subspaces and optimize the acquisition function within this region, focusing on only the important variables. We demonstrate that our proposed method achieves cumulative regret with a sublinear growth rate in the worst case while maintaining computational efficiency. Experiments on high-dimensional synthetic functions and real-world problems show that our method achieves state-of-the-art performance.
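As a rough illustration of the variable-selection idea described above, the sketch below fits an ARD Gaussian process, treats dimensions with small length scales as important, and optimizes a simple UCB acquisition over only those dimensions. This is a minimal sketch under that interpretation, not the paper's algorithm; the function names, the scikit-learn surrogate, and the random-search acquisition step are illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_variables(X, y, k):
    """Fit an ARD GP and return the k dimensions with the smallest length scales."""
    D = X.shape[1]
    kernel = RBF(length_scale=np.ones(D))              # one length scale per dimension (ARD)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    length_scales = np.atleast_1d(gp.kernel_.length_scale)
    return np.argsort(length_scales)[:k], gp           # small length scale = high relevance

def propose_candidate(gp, incumbent, active_dims, bounds, n_samples=2048):
    """Random-search UCB over the active dims only; inactive dims stay at the incumbent."""
    cand = np.tile(incumbent, (n_samples, 1))
    lo, hi = bounds[active_dims, 0], bounds[active_dims, 1]
    cand[:, active_dims] = np.random.uniform(lo, hi, size=(n_samples, len(active_dims)))
    mu, std = gp.predict(cand, return_std=True)
    return cand[np.argmax(mu + 2.0 * std)]              # UCB, assuming maximization
```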
Related papers
- An Adaptive Dropout Approach for High-Dimensional Bayesian Optimization [0.0]
We propose AdaDropout to tackle high-dimensional challenges and improve solution quality.
It achieves superior results when compared with state-of-the-art high-dimensional Bayesian optimization approaches.
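The summary above gives no algorithmic detail, so the snippet below only illustrates the generic dropout idea for high-dimensional BO that the name suggests: optimize the acquisition over a random subset of coordinates each iteration and fill in the rest from the incumbent. It is an assumption-laden sketch, not AdaDropout itself, and `optimize_subspace` is a hypothetical callback.

```python
import numpy as np

def dropout_candidate(best_x, bounds, d_active, optimize_subspace):
    """Generic dropout-BO step (illustrative): optimize only a random subset of
    coordinates; copy the remaining ones from the best point found so far."""
    D = best_x.shape[0]
    active = np.random.choice(D, size=d_active, replace=False)
    x = best_x.copy()                                   # "fill-in" with incumbent values
    x[active] = optimize_subspace(active, bounds[active])
    return x
```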
arXiv Detail & Related papers (2025-04-15T16:23:25Z) - High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method combines in an adaptive way both random and supervised linear embeddings.
The obtained results show the high potential of EGORSE to solve high-dimensional black-box optimization problems.
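For intuition, here is a sketch of the random linear embedding half of such approaches (REMBO-style): candidates are searched in a small space and mapped up through a random matrix. The supervised-embedding component and the adaptive combination are not shown, and the helper name is hypothetical.

```python
import numpy as np

def make_random_embedding(D, d, box=1.0):
    """Map low-dimensional candidates y in R^d up to the original space R^D."""
    A = np.random.randn(D, d)                       # random projection matrix
    def to_high_dim(y):
        return np.clip(A @ y, -box, box)            # clip back into the feasible box
    return to_high_dim

# A low-dimensional BO loop can then search over y and evaluate the expensive
# objective at to_high_dim(y).
```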
arXiv Detail & Related papers (2025-02-02T16:57:05Z) - Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
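A common way to encode a known finite symmetry group in the kernel is group averaging, sketched below with coordinate permutations as a stand-in group; the paper's fusion application has its own invariances, and this brute-force construction is only illustrative.

```python
import numpy as np
from itertools import permutations

def rbf(x, z, ell=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * ell ** 2))

def group_averaged_kernel(x, z, group=None):
    """Average a base kernel over a finite group G acting by coordinate permutation.
    The double sum keeps the result a valid (PSD) kernel; cost grows as |G|^2."""
    if group is None:
        group = [np.array(p) for p in permutations(range(len(x)))]
    return float(np.mean([rbf(x[g1], z[g2]) for g1 in group for g2 in group]))
```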
arXiv Detail & Related papers (2024-10-22T12:51:46Z) - An Adaptive Dimension Reduction Estimation Method for High-dimensional Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to operate these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z) - Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
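The surrogate described above can be pictured with a randomized-prior ensemble member in the style of Osband et al.: a trainable network added to a frozen, randomly initialized prior network, with several bootstrapped members providing the predictive spread. A minimal PyTorch sketch with illustrative sizes:

```python
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, d_out))

class RandomizedPriorMember(nn.Module):
    """One ensemble member: a trainable network plus a frozen random 'prior' network."""
    def __init__(self, d_in, d_out, beta=1.0):
        super().__init__()
        self.trainable, self.prior, self.beta = mlp(d_in, d_out), mlp(d_in, d_out), beta
        for p in self.prior.parameters():
            p.requires_grad_(False)                 # the prior network is never trained

    def forward(self, x):
        return self.trainable(x) + self.beta * self.prior(x)

# Fitting several members on bootstrapped resamples of the data yields a mean and
# spread that can drive the acquisition function.
```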
arXiv Detail & Related papers (2023-02-14T18:55:21Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
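For reference, ZO-signGD can be sketched as follows: a forward-difference gradient estimate from random directions, followed by a step along its sign only. The step size, smoothing radius, and query budget below are illustrative defaults.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.01, mu=1e-3, q=10, iters=100):
    """Zeroth-order sign-based gradient descent on a black-box objective f."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = np.zeros_like(x)
        for _ in range(q):                            # q random-direction queries per step
            u = np.random.randn(*x.shape)
            g += (f(x + mu * u) - f(x)) / mu * u      # forward-difference estimator
        x -= lr * np.sign(g / q)                      # descend using only the sign
    return x
```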
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method automatically learns axis-aligned sub-spaces, i.e., spaces containing the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
arXiv Detail & Related papers (2021-09-20T01:55:43Z) - Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
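The expand-and-shift idea can be caricatured as follows: keep a box around the current best point and grow it across iterations so an optimum outside the initial guess is eventually covered. The growth factor and re-centering rule here are placeholders, not the paper's schedule.

```python
import numpy as np

def expand_and_shift_bounds(best_x, half_width, growth=1.5):
    """Grow the search box and re-center it on the incumbent (illustrative rule)."""
    new_half_width = half_width * growth
    lo, hi = best_x - new_half_width, best_x + new_half_width
    return np.stack([lo, hi], axis=1), new_half_width
```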
arXiv Detail & Related papers (2020-09-05T14:24:40Z) - Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z) - Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
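The key identity can be sketched concretely for the Lasso regularization strength: on a locally fixed active set, the KKT conditions give a closed-form Jacobian of the coefficients, which a validation loss can be chained through. The sketch below uses an explicit linear solve for clarity, whereas the paper's algorithm avoids matrix inversion; the function name and scikit-learn usage are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha):
    """d(validation MSE/2)/d(alpha) via implicit differentiation on the active set."""
    n, m = X_tr.shape[0], X_val.shape[0]
    beta = Lasso(alpha=alpha, fit_intercept=False).fit(X_tr, y_tr).coef_
    S = np.flatnonzero(beta)                          # active (non-zero) coefficients
    if S.size == 0:
        return 0.0
    Xs, sign_S = X_tr[:, S], np.sign(beta[S])
    # KKT on S for sklearn's objective ||y - Xb||^2/(2n) + alpha*||b||_1 implies
    # d beta_S / d alpha = -n (Xs^T Xs)^{-1} sign(beta_S) while the support is fixed.
    dbeta_S = np.linalg.solve(Xs.T @ Xs, -n * sign_S)
    grad_val = -(X_val[:, S].T @ (y_val - X_val @ beta)) / m
    return float(grad_val @ dbeta_S)                  # chain rule through the solver
```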
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.