Linear Embedding-based High-dimensional Batch Bayesian Optimization without Reconstruction Mappings
- URL: http://arxiv.org/abs/2211.00947v1
- Date: Wed, 2 Nov 2022 08:11:10 GMT
- Title: Linear Embedding-based High-dimensional Batch Bayesian Optimization without Reconstruction Mappings
- Authors: Shuhei A. Horiguchi, Tomoharu Iwata, Taku Tsuzuki, Yosuke Ozawa
- Abstract summary: We show that our method is applicable to batch optimization problems with thousands of dimensions without any computational difficulty.
We demonstrate the effectiveness of our method on high-dimensional benchmarks and a real-world function.
- Score: 21.391136086094225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimization of high-dimensional black-box functions is a challenging
problem. When a low-dimensional linear embedding structure can be assumed,
existing Bayesian optimization (BO) methods often transform the original
problem into optimization in a low-dimensional space. They exploit the
low-dimensional structure and reduce the computational burden. However, we
reveal that this approach could be limited or inefficient in exploring the
high-dimensional space mainly due to the biased reconstruction of the
high-dimensional queries from the low-dimensional queries. In this paper, we
investigate a simple alternative approach: tackling the problem in the original
high-dimensional space using the information from the learned low-dimensional
structure. We provide a theoretical analysis of the exploration ability.
Furthermore, we show that our method is applicable to batch optimization
problems with thousands of dimensions without any computational difficulty. We
demonstrate the effectiveness of our method on high-dimensional benchmarks and
a real-world function.
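As a rough, hypothetical sketch of the idea in the abstract (not the authors' published algorithm): keep the learned linear embedding A inside the GP kernel, but maximize the acquisition function over candidates drawn from the original D-dimensional box, so that no low-dimensional query ever has to be reconstructed. The embedding A, the UCB acquisition, and all sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 1000, 5                        # ambient and embedding dimensions (toy sizes)
A = rng.standard_normal((d, D))       # learned linear embedding (assumed given here)

def kernel(X1, X2, lengthscale=1.0):
    # RBF kernel evaluated on the embedded inputs A @ x
    Z1, Z2 = X1 @ A.T, X2 @ A.T
    sq = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def ucb(Xcand, Xobs, y, beta=2.0, noise=1e-6):
    # GP posterior mean/variance plus an upper-confidence-bound score
    K = kernel(Xobs, Xobs) + noise * np.eye(len(Xobs))
    Ks = kernel(Xcand, Xobs)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu + beta * np.sqrt(np.maximum(var, 0.0))

# the acquisition is maximized over the ORIGINAL [-1, 1]^D box, so no
# low-dimensional query is mapped back to the high-dimensional space
Xobs = rng.uniform(-1, 1, (10, D))
y = np.sin(Xobs @ A.T).sum(-1)        # toy objective with embedded structure
Xcand = rng.uniform(-1, 1, (256, D))
batch = Xcand[np.argsort(-ucb(Xcand, Xobs, y))[:4]]   # pick a batch of 4 queries
```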
Related papers
- On Probabilistic Embeddings in Optimal Dimension Reduction [1.2085509610251701]
Dimension reduction algorithms are a crucial part of many data science pipelines.
Despite their wide utilization, many non-linear dimension reduction algorithms are poorly understood from a theoretical perspective.
arXiv Detail & Related papers (2024-08-05T12:46:21Z)
- PMaF: Deep Declarative Layers for Principal Matrix Features [37.662505982849844]
We explore two differentiable deep declarative layers, namely least squares on sphere (LESS) and implicit eigen decomposition (IED).
Both can be used to represent data features with a low-dimensional vector containing the dominant information from a high-dimensional matrix.
arXiv Detail & Related papers (2023-06-26T15:13:36Z)
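As a loose illustration of what an eigendecomposition layer like IED extracts, the sketch below summarizes a large symmetric matrix by its dominant eigenvector via plain power iteration; the paper's differentiable declarative formulation is not reproduced, and all sizes are made up.

```python
import numpy as np

def dominant_eigvec(M, iters=200):
    # power iteration: repeatedly apply M and renormalize
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v                          # a 64-vector summarizing the 64x64 matrix

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 64))
M = X.T @ X / 500                     # symmetric high-dimensional matrix (covariance)
feature = dominant_eigvec(M)          # compact feature carrying the dominant information
```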
- Fighting the curse of dimensionality: A machine learning approach to finding global optima [77.34726150561087]
This paper shows how to find global optima in structural optimization problems.
By exploiting certain cost functions, we either obtain the global optimum at best or, at worst, obtain results superior to those of established optimization procedures.
arXiv Detail & Related papers (2021-10-28T09:50:29Z)
- High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z)
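A generic latent-space BO loop in the spirit of the entry above; the paper's deep-metric-learning inductive bias is not reproduced. The `encode`/`decode` functions stand in for a trained VAE, and everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((8, 256))          # toy linear stand-in for a VAE

encode = lambda x: x @ W.T                        # structured input -> 8-d latent code
decode = lambda z: z @ W                          # latent code -> input space
objective = lambda x: -np.sum(x**2, axis=-1)      # expensive black box (toy)

def gp_ucb(Zc, Z, y, beta=2.0):
    # small GP-UCB over latent points
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None]) ** 2).sum(-1))
    K = k(Z, Z) + 1e-6 * np.eye(len(Z))
    Ks = k(Zc, Z)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu + beta * np.sqrt(np.maximum(var, 0.0))

Z = rng.standard_normal((10, 8))
y = objective(decode(Z))
for _ in range(10):                               # BO happens in the latent space
    cand = rng.standard_normal((256, 8))
    z_next = cand[gp_ucb(cand, Z, y).argmax()]
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(decode(z_next)[None])[0])
```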
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
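A minimal sketch of the normalized maximum likelihood (NML) predictive distribution that the entry above builds on, for the simplest possible model class (a Bernoulli coin). The paper's scalable neural-network approximation is far more involved; this only illustrates the refit-and-normalize idea.

```python
import numpy as np

def nml_predict(data):
    """p_NML(y) is proportional to max_theta p(data + [y] | theta), normalized over y."""
    scores = []
    for y in (0, 1):
        aug = np.append(data, y)
        theta = aug.mean()                         # MLE refit WITH the candidate label
        # clip to avoid log(0) when theta is exactly 0 or 1
        p = np.clip(np.where(aug == 1, theta, 1 - theta), 1e-12, None)
        scores.append(np.exp(np.log(p).sum()))
    scores = np.array(scores)
    return scores / scores.sum()                   # normalize over candidate labels

print(nml_predict(np.array([1, 1, 1, 0])))         # hedged prediction despite the refit
```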
- Good practices for Bayesian Optimization of high dimensional structured spaces [15.488642552157131]
We study the effect of different search space design choices for performing Bayesian Optimization in high dimensional structured datasets.
We evaluate new methods to automatically define the optimization bounds in the latent space.
We provide recommendations for the practitioners.
arXiv Detail & Related papers (2020-12-31T07:00:39Z)
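One simple way to realize the "automatic latent bounds" idea from the entry above: derive a box for the latent-space optimizer from the latent codes of the training data. The exact rule in the paper may differ; the padding rule below is an assumption.

```python
import numpy as np

def latent_bounds(Z, margin=0.1):
    """Axis-aligned bounds covering the encoded data, padded by `margin`."""
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad          # search box for the latent-space acquisition

Z = np.random.default_rng(3).standard_normal((100, 16))   # stand-in latent codes
lo, hi = latent_bounds(Z)
```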
- Parameter Optimization using high-dimensional Bayesian Optimization [0.0]
We focus on solutions to practical problems, such as tuning the parameters of an electron accelerator, or even simpler tasks that can be run and optimized just in time on a standard laptop.
Our main contributions are 1) a comparison of how the log-likelihood affects the angle difference between the real projection matrix and the found matrix, 2) a short analysis of how dimensionality reduction techniques can be used for feature selection, and 4) a novel algorithm called "BORING", which allows for a simple fallback mechanism if the matrix identification fails.
arXiv Detail & Related papers (2020-10-05T13:13:28Z)
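Contribution 1) above concerns the angle between the true and the recovered projection matrices; one standard way to quantify this is via principal angles between the two column spaces, sketched below with SciPy (matrix sizes are made up).

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(4)
A_true = rng.standard_normal((1000, 5))                      # true projection matrix
A_found = A_true + 0.05 * rng.standard_normal((1000, 5))     # noisy recovered matrix

angles = subspace_angles(A_true, A_found)    # principal angles in radians, descending
print(np.degrees(angles).max())              # largest angle as a recovery-error measure
```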
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
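A toy rendition of the expand-and-shift idea from the entry above: grow and re-center the search box each iteration so the optimum need not lie in the initial region. The growth rate, the re-centering rule, and the random-candidate "acquisition" are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: -np.sum((x - 3.0) ** 2, axis=-1)   # optimum at 3, outside the first box

lo, hi = -np.ones(2), np.ones(2)                 # initial search box
best_x, best_y = None, -np.inf
for t in range(8):
    X = rng.uniform(lo, hi, (64, 2))             # stand-in for an acquisition step
    y = f(X)
    if y.max() > best_y:
        best_y, best_x = y.max(), X[y.argmax()]
    width = 1.5 * (hi - lo)                      # expand the box...
    lo, hi = best_x - width / 2, best_x + width / 2   # ...and shift toward the incumbent
print(best_x, best_y)
```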
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
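A rough sketch-and-solve example for L2-regularized least squares using the Gaussian embedding mentioned above (the SRHT variant is omitted, and the paper's dimension-adaptive scheme is not captured):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m, lam = 5000, 50, 400, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)       # Gaussian sketching matrix
SA, Sb = S @ A, S @ b                              # compress the n-row problem to m rows
# solve min ||SA x - Sb||^2 + lam ||x||^2 via the normal equations
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))   # relative error
```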
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
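A toy rendition of the entry above: random search whose perturbations are drawn from a learned low-dimensional subspace instead of the full space. The online manifold-learning step is replaced by a fixed, slightly noisy basis here, so this is only an assumption-laden sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
D, d = 100, 3
U_true = np.linalg.qr(rng.standard_normal((D, d)))[0]    # latent manifold basis
f = lambda x: -np.sum((U_true.T @ x - 1.0) ** 2)         # objective varies only on the manifold

U_learned = U_true + 0.01 * rng.standard_normal((D, d))  # stand-in for the learned basis
x, fx = np.zeros(D), f(np.zeros(D))
for _ in range(500):
    step = 0.1 * (U_learned @ rng.standard_normal(d))    # search along learned directions
    if f(x + step) > fx:                                 # keep improving moves only
        x, fx = x + step, f(x + step)
print(fx)
```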
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.