BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings
- URL: http://arxiv.org/abs/2412.12918v1
- Date: Tue, 17 Dec 2024 13:51:24 GMT
- Title: BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings
- Authors: Lam Ngo, Huong Ha, Jeffrey Chan, Hongyu Zhang
- Abstract summary: We introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines.
We also propose an adaptive selection technique to identify the most promising lines for each round of line-based optimization.
Our experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.
- Score: 14.558601519561721
- Abstract: When it comes to expensive black-box optimization problems, Bayesian Optimization (BO) is a well-known and powerful solution. Many real-world applications involve a large number of dimensions, hence scaling BO to high dimensions is of much interest. However, state-of-the-art high-dimensional BO methods still suffer from the curse of dimensionality, highlighting the need for further improvements. In this work, we introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines using a novel tailored line-based optimization procedure. To improve efficiency, we also propose an adaptive selection technique to identify the most promising lines for each round of line-based optimization. Additionally, we incorporate a subspace embedding technique for better scaling to high-dimensional spaces. We further provide a theoretical analysis of our proposed method to study its convergence properties. Our extensive experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.
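The core idea of optimizing along one-dimensional direction lines through the incumbent can be sketched as follows. This is an illustrative sketch only, not the authors' BOIDS implementation: here the direction is drawn uniformly at random and candidates are evaluated directly, whereas BOIDS uses incumbent-guided directions, an adaptive line-selection technique, and a surrogate-driven acquisition step. All function and parameter names are assumptions.

```python
import numpy as np

def line_search_step(f, incumbent, bounds, n_samples=16, rng=None):
    """One illustrative round of 1-D line-based optimization:
    draw a random unit direction through the incumbent, evaluate f
    at points along that line (clipped to the box bounds), and
    keep the best point found."""
    rng = np.random.default_rng() if rng is None else rng
    dim = incumbent.size
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    lo, hi = bounds
    # Sample offsets t spanning the box diagonal in both directions.
    span = np.linalg.norm(hi - lo)
    ts = np.linspace(-span, span, n_samples)
    candidates = np.clip(incumbent + ts[:, None] * direction, lo, hi)
    values = np.array([f(x) for x in candidates])
    best = candidates[values.argmin()]
    return best if f(best) < f(incumbent) else incumbent

# Toy usage: minimize a sphere function in 10-D.
f = lambda x: float(np.sum(x ** 2))
bounds = (np.full(10, -5.0), np.full(10, 5.0))
x = np.full(10, 3.0)
for _ in range(50):
    x = line_search_step(f, x, bounds)
```

Even this naive variant makes steady progress because each round reduces the search to a cheap one-dimensional problem; the line-selection and embedding components of BOIDS address how to choose good lines and scale to many dimensions.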
Related papers
- High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method combines in an adaptive way both random and supervised linear embeddings.
The obtained results show the high potential of EGORSE to solve high-dimensional blackbox optimization problems.
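The random-embedding half of this idea can be sketched as follows. This is an illustrative sketch, not EGORSE itself: the embedding map, dimensions, and the inner optimizer (plain random search standing in for BO) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 100, 4                 # ambient and embedding dimensions (illustrative)
A = rng.normal(size=(D, d))   # random linear embedding

def lift(z, lo=-1.0, hi=1.0):
    """Map a low-dimensional point z into the D-dimensional box."""
    return np.clip(A @ z, lo, hi)

# Toy objective: only 4 of the 100 dimensions actually matter.
def f(x):
    return float(np.sum(x[:4] ** 2))

# Random search in the 4-D embedded space stands in for the inner optimizer.
best_z, best_val = None, np.inf
for _ in range(200):
    z = rng.uniform(-1, 1, size=d)
    val = f(lift(z))
    if val < best_val:
        best_z, best_val = z, val
```

The appeal is that the search happens in d dimensions rather than D; a supervised embedding, as combined adaptively in this paper, replaces the random matrix A with one learned from observed data.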
arXiv Detail & Related papers (2025-02-02T16:57:05Z) - Dimensionality Reduction Techniques for Global Bayesian Optimisation [1.433758865948252]
We explore Latent Space Bayesian optimisation, that applies dimensionality reduction to perform BO in a reduced-dimensional subspace.
We employ Variational Autoencoders (VAEs) to manage more complex data structures and general DR tasks.
We suggest a few key corrections in their implementation, originally designed for tasks such as molecule generation, and reformulate the algorithm for broader optimisation purposes.
arXiv Detail & Related papers (2024-12-12T11:27:27Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method is able to automatically learn axis-aligned subspaces, i.e., subspaces spanned by the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
arXiv Detail & Related papers (2021-09-20T01:55:43Z) - High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z) - Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasing large momentum parameter for the first-order moment is sufficient for adaptive scaling.
We also give insights for increasing the momentum in a stagewise manner in accordance with stagewise decreasing step size.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
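The zeroth-order idea behind such methods can be sketched as follows. This is a generic averaged two-point ZO gradient estimator, not HOZOG's exact formulation; the smoothing parameter, number of directions, and function names are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Averaged two-point zeroth-order gradient estimate:
    g ~ mean over random directions u of ((f(x + mu*u) - f(x)) / mu) * u."""
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_dirs

# Usage: gradient descent on a quadratic using function evaluations only.
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
for _ in range(100):
    x = x - 0.1 * zo_gradient(f, x)
```

Averaging over several random directions reduces the variance of the estimate, which is the role the average zeroth-order hyper-gradients play in the update above.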
arXiv Detail & Related papers (2021-02-17T21:03:05Z) - Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z) - Semi-supervised Embedding Learning for High-dimensional Bayesian Optimization [12.238019485880583]
We propose a novel framework, which finds a low-dimensional space to perform Bayesian optimization iteratively through semi-supervised dimension reduction.
SILBO incorporates both labeled points and unlabeled points acquired from the acquisition function to guide the embedding space learning.
We show that SILBO outperforms the existing state-of-the-art high-dimensional Bayesian optimization methods.
arXiv Detail & Related papers (2020-05-29T14:37:12Z) - Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization [20.511115436145467]
We identify several crucial issues and misconceptions about the use of linear embeddings for BO.
We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO.
arXiv Detail & Related papers (2020-01-31T05:02:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.