Semi-supervised Embedding Learning for High-dimensional Bayesian
Optimization
- URL: http://arxiv.org/abs/2005.14601v3
- Date: Mon, 19 Oct 2020 05:36:39 GMT
- Title: Semi-supervised Embedding Learning for High-dimensional Bayesian
Optimization
- Authors: Jingfan Chen, Guanghui Zhu, Chunfeng Yuan, Yihua Huang
- Abstract summary: We propose a novel framework, which finds a low-dimensional space to perform Bayesian optimization iteratively through semi-supervised dimension reduction.
SILBO incorporates both labeled points and unlabeled points acquired from the acquisition function to guide the embedding space learning.
We show that SILBO outperforms the existing state-of-the-art high-dimensional Bayesian optimization methods.
- Score: 12.238019485880583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization is a widely applied methodology for optimizing
expensive black-box functions. Despite its success, it still struggles in
high-dimensional search spaces. To alleviate this problem, we propose a
novel Bayesian optimization framework (termed SILBO), which finds a
low-dimensional space to perform Bayesian optimization iteratively through
semi-supervised dimension reduction. SILBO incorporates both labeled points and
unlabeled points acquired from the acquisition function to guide the embedding
space learning. To accelerate the learning procedure, we present a randomized
method for generating the projection matrix. Furthermore, to map from the
low-dimensional space back to the original high-dimensional space, we propose two
mapping strategies, $\text{SILBO}_{FZ}$ and $\text{SILBO}_{FX}$, chosen according to
the evaluation overhead of the objective function. Experimental results on both
synthetic functions and hyperparameter optimization tasks demonstrate that SILBO
outperforms the existing state-of-the-art high-dimensional Bayesian
optimization methods.
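The loop below is a minimal sketch of the embedding-based BO scheme the abstract describes, using a fixed random projection as a stand-in for the semi-supervised embedding that SILBO actually learns; the toy objective, dimensions, and UCB acquisition are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical black-box objective in D = 100 dimensions (only 5 dims matter).
def objective(x):
    return -np.sum((x[:5] - 0.5) ** 2)

D, d, n_iter = 100, 5, 30
rng = np.random.default_rng(0)
A = rng.normal(size=(D, d)) / np.sqrt(d)   # random projection (stand-in for the learned embedding)

Z = rng.uniform(-1, 1, size=(10, d))       # initial low-dimensional designs
y = np.array([objective(np.clip(z @ A.T, -1, 1)) for z in Z])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(n_iter):
    gp.fit(Z, y)                            # surrogate lives in the low-dim space
    cand = rng.uniform(-1, 1, size=(512, d))
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 2.0 * sd)]  # UCB acquisition
    x_next = np.clip(z_next @ A.T, -1, 1)    # map back to the original space
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(x_next))
print("best value:", y.max())
```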
Related papers
- High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method adaptively combines both random and supervised linear embeddings.
The obtained results show the high potential of EGORSE to solve high-dimensional black-box optimization problems.
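A hedged sketch of the core idea of alternating embeddings: here a supervised embedding is estimated with partial least squares once enough evaluations exist (PLS is our stand-in; the paper's supervised embedding may differ), falling back to a random one otherwise.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def embedding(X, y, D, d, rng, min_pts=10):
    """Supervised linear embedding when enough data exists, else random."""
    if len(y) >= min_pts:
        pls = PLSRegression(n_components=d).fit(X, y)
        B = pls.x_weights_            # (D, d) supervised directions
    else:
        B = rng.normal(size=(D, d))   # random directions
    Q, _ = np.linalg.qr(B)            # orthonormalize the columns
    return Q
```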
arXiv Detail & Related papers (2025-02-02T16:57:05Z)
- High-Dimensional Bayesian Optimization via Random Projection of Manifold Subspaces [0.0]
A common framework to tackle this problem is to assume that the objective function depends on a limited set of features that lie on a low-dimensional manifold embedded in the high-dimensional ambient space.
This paper proposes a new approach for BO in high dimensions by exploiting a new representation of the objective function.
Our approach enables efficient optimization of the acquisition function in the low-dimensional space, and offers a more effective mapping back to the original high-dimensional space than existing works in the same setting.
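As a sketch of the back-projection step (our simplification, not the paper's exact construction): optimize the acquisition in the low-dimensional space, then map candidates back with a pseudo-inverse and clip to the feasible box.

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 50, 4
A = rng.normal(size=(d, D)) / np.sqrt(d)   # random projection R^D -> R^d
A_pinv = np.linalg.pinv(A)                 # least-squares back-projection R^d -> R^D

def back_project(z, lo=-1.0, hi=1.0):
    """Map a low-dimensional candidate back to the feasible box in R^D."""
    return np.clip(A_pinv @ z, lo, hi)
```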
arXiv Detail & Related papers (2024-12-21T09:41:24Z)
- BOIDS: High-dimensional Bayesian Optimization via Incumbent-guided Direction Lines and Subspace Embeddings [14.558601519561721]
We introduce BOIDS, a novel high-dimensional BO algorithm that guides optimization by a sequence of one-dimensional direction lines.
We also propose an adaptive selection technique to identify the most promising line for each round of line-based optimization.
Our experimental results show that BOIDS outperforms state-of-the-art baselines on various synthetic and real-world benchmark problems.
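A minimal sketch of generating candidates along an incumbent-guided line (the direction and step grid are illustrative assumptions; BOIDS's line construction and subspace embeddings are more involved):

```python
import numpy as np

def line_candidates(x_best, direction, bounds, n=64):
    """Candidates along an incumbent-guided line, clipped to the box bounds."""
    v = direction / np.linalg.norm(direction)
    ts = np.linspace(-1.0, 1.0, n)
    lo, hi = bounds
    return np.clip(x_best[None, :] + ts[:, None] * v[None, :], lo, hi)
```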
arXiv Detail & Related papers (2024-12-17T13:51:24Z)
- High dimensional Bayesian Optimization via Condensing-Expansion Projection [1.6355174910200032]
In high-dimensional settings, Bayesian optimization (BO) can be prohibitively expensive or even infeasible.
We introduce a novel random projection-based approach for high-dimensional BO that does not rely on the effective subspace assumption.
Experimental results demonstrate that both algorithms outperform existing random embedding-based algorithms in most cases.
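A hedged sketch of the condense-then-expand idea under our simplifying assumptions (a fresh Gaussian projection per BO round, transpose-based expansion with clipping); the paper's exact projection pair differs.

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 100, 6
A = rng.normal(size=(d, D)) / np.sqrt(d)  # fresh random projection for one round

def condense(x):
    """Condense: project a high-dimensional point into R^d."""
    return A @ x

def expand(z):
    """Expand: map a low-dimensional point back to the box in R^D."""
    return np.clip(A.T @ z, -1.0, 1.0)
```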
arXiv Detail & Related papers (2024-08-09T04:47:38Z)
- Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [71.35604981129838]
Bi-level optimization has become a fundamental mathematical framework for addressing hierarchical machine learning problems.
Traditional gradient-based bi-level optimization algorithms are ill-suited to meet the demands of large-scale applications.
We introduce $(\text{FG})^2\text{U}$, which achieves an unbiased approximation of the meta-gradient for bi-level optimization.
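For context, a toy example of computing a meta-gradient by fully unrolling the inner optimization, the baseline whose memory cost $(\text{FG})^2\text{U}$ is designed to avoid; the quadratic losses here are our own illustration.

```python
import numpy as np

# Toy bi-level problem: inner loss g(w; theta) = 0.5*(w - theta)^2,
# outer loss f(w) = 0.5*(w - 3)^2. Unroll K inner gradient steps and
# backpropagate through them to get the meta-gradient df/dtheta.
def unrolled_meta_gradient(theta, w0=0.0, lr=0.4, K=20):
    w, dw_dtheta = w0, 0.0
    for _ in range(K):
        w_new = w - lr * (w - theta)           # inner gradient step
        dw_dtheta = dw_dtheta * (1 - lr) + lr  # chain rule through the step
        w = w_new
    return (w - 3.0) * dw_dtheta               # df/dw * dw/dtheta

theta = 0.0
for _ in range(100):
    theta -= 0.1 * unrolled_meta_gradient(theta)
print(theta)  # approaches 3.0
```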
arXiv Detail & Related papers (2024-06-20T08:21:52Z)
- Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
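A hedged sketch of the tree-based optimistic optimisation ingredient, in a DOO-style form with a Lipschitz-type bound; BOO's actual algorithm intertwines this kind of partitioning with GP-based BO.

```python
import numpy as np

def optimistic_optimise(f, lo, hi, lipschitz=10.0, n_splits=25):
    """DOO-style optimistic search on [lo, hi]: repeatedly split the cell
    whose optimistic upper bound f(centre) + L * radius is largest."""
    cells = [(lo, hi, f((lo + hi) / 2))]
    for _ in range(n_splits):
        ucb = [v + lipschitz * (b - a) / 2 for a, b, v in cells]
        a, b, _ = cells.pop(int(np.argmax(ucb)))
        m = (a + b) / 2
        cells += [(a, m, f((a + m) / 2)), (m, b, f((m + b) / 2))]
    return max(v for _, _, v in cells)

print(optimistic_optimise(lambda x: -(x - 0.3) ** 2, 0.0, 1.0))
```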
arXiv Detail & Related papers (2021-05-10T13:07:44Z)
- High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces [14.03847432040056]
We argue that a surrogate model defined on sparse axis-aligned subspaces offers an attractive compromise between flexibility and parsimony.
We demonstrate that our approach, which relies on Hamiltonian Monte Carlo for inference, can rapidly identify sparse subspaces relevant to modeling the unknown objective function.
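This method (SAASBO) has an implementation in BoTorch; a minimal usage sketch follows, with the caveat that exact class and function names are version-dependent and the data here is synthetic.

```python
import torch
from botorch.models.fully_bayesian import SaasFullyBayesianSingleTaskGP
from botorch.fit import fit_fully_bayesian_model_nuts

train_X = torch.rand(20, 30, dtype=torch.double)     # 20 points in 30 dims
train_Y = train_X[:, :3].sum(dim=-1, keepdim=True)   # only 3 dims matter

# The SAAS prior shrinks most length-scales, selecting a sparse subspace;
# hyperparameter inference uses NUTS, an HMC variant.
gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
fit_fully_bayesian_model_nuts(gp, warmup_steps=256, num_samples=128)
posterior = gp.posterior(torch.rand(5, 30, dtype=torch.double))
```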
arXiv Detail & Related papers (2021-02-27T23:06:24Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
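A hedged sketch of building one such subspace from GP prior draws; the discretised domain and basis size are our illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 50)[:, None]    # discretised function domain
gp = GaussianProcessRegressor()       # stand-in for the experimenter's GP prior
basis = gp.sample_y(t, n_samples=3, random_state=4)  # (50, 3) draws span a subspace

def candidate(origin, coeffs):
    """A candidate function: the origin plus a combination of the GP draws."""
    return origin + basis @ coeffs
```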
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both of our algorithms, the cumulative regret grows at sub-linear rates.
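A minimal sketch of the expand-and-recentre step; the growth factor and recentring on the incumbent are our illustrative assumptions.

```python
import numpy as np

def expand_bounds(lo, hi, x_best, grow=1.5):
    """Grow the search box geometrically and recentre it on the incumbent."""
    half = (hi - lo) * grow / 2.0
    return x_best - half, x_best + half
```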
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
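A hedged sketch of guiding random search with a learned linear manifold: here the manifold is estimated by PCA over elite points, whereas the paper learns it online with a different model.

```python
import numpy as np

def guided_step(X_elite, x, rng, scale=0.1, d=3):
    """Random search step restricted to the top-d PCA subspace of elite points."""
    centred = X_elite - X_elite.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    basis = Vt[:d]                         # (d, D) estimated tangent directions
    return x + scale * (rng.normal(size=d) @ basis)
```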
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability region.
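A minimal sketch of warping via the prior's inverse CDF, assuming a hypothetical 1-D Gaussian prior on the optimum's location; this is one possible warping, and the paper's construction may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-D prior on the optimum's location, centred at 0.7.
prior = stats.norm(loc=0.7, scale=0.1)

def warp(u):
    """Map uniform search coordinates through the prior's inverse CDF,
    so the warped space is denser near the prior mode of the optimum."""
    return np.clip(prior.ppf(u), 0.0, 1.0)

print(warp(np.array([0.1, 0.5, 0.9])))  # points concentrate around 0.7
```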
arXiv Detail & Related papers (2020-03-27T06:18:49Z)