Good practices for Bayesian Optimization of high dimensional structured spaces
- URL: http://arxiv.org/abs/2012.15471v2
- Date: Wed, 6 Jan 2021 14:24:42 GMT
- Title: Good practices for Bayesian Optimization of high dimensional structured spaces
- Authors: Eero Siivola, Javier Gonzalez, Andrei Paleyes, Aki Vehtari
- Abstract summary: We study the effect of different search space design choices for performing Bayesian Optimization in high dimensional structured datasets.
We evaluate new methods to automatically define the optimization bounds in the latent space.
We provide recommendations for the practitioners.
- Score: 15.488642552157131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing availability of structured but high dimensional data has
opened new opportunities for optimization. One emerging and promising avenue is
the exploration of unsupervised methods for projecting structured high
dimensional data into low dimensional continuous representations, simplifying
the optimization problem and enabling the application of traditional
optimization methods. However, this line of research has been purely
methodological with little connection to the needs of practitioners so far. In
this paper, we study the effect of different search space design choices for
performing Bayesian Optimization in high dimensional structured datasets. In
particular, we analyse the influence of the dimensionality of the latent space,
the role of the acquisition function and evaluate new methods to automatically
define the optimization bounds in the latent space. Finally, based on
experimental results using synthetic and real datasets, we provide
recommendations for the practitioners.
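As an illustration of the kind of pipeline the abstract describes, the sketch below runs Bayesian Optimization with a Gaussian-process surrogate and a UCB acquisition inside a low-dimensional latent space, with explicit bounds on that space. The linear encoder/decoder pair, the latent bounds, and the toy objective are all hypothetical stand-ins for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned projection: a random linear map
# between a 50-D "structured" space and a 2-D latent space.
D, d = 50, 2
A = rng.normal(size=(D, d)) / np.sqrt(D)
decode = lambda z: z @ A.T                      # latent -> high-dimensional
objective = lambda x: -np.sum((x - 0.5) ** 2)   # toy black-box objective

def rbf(X1, X2, ls=1.0):
    # Squared-exponential kernel with unit signal variance.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Z, y, Zq, noise=1e-4):
    # Standard GP posterior mean and std at query points Zq.
    K = rbf(Z, Z) + noise * np.eye(len(Z))
    Ks = rbf(Zq, Z)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v**2).sum(0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

# Optimization bounds in the latent space (one of the design choices studied).
bounds = (-2.0, 2.0)

# Initial design: random latent points, evaluated through the decoder.
Z = rng.uniform(*bounds, size=(5, d))
y = np.array([objective(decode(z)) for z in Z])

for _ in range(15):
    Zq = rng.uniform(*bounds, size=(256, d))    # candidate latent points
    mu, sd = gp_posterior(Z, (y - y.mean()) / (y.std() + 1e-9), Zq)
    ucb = mu + 2.0 * sd                          # UCB acquisition
    z_next = Zq[np.argmax(ucb)]
    Z = np.vstack([Z, z_next])
    y = np.append(y, objective(decode(z_next)))

print("best value found:", y.max())
```

In a real setting the decoder would be a trained generative model (e.g. a VAE) and the acquisition would be optimized more carefully than by random search, but the structure of the loop is the same.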
Related papers
- An Adaptive Dimension Reduction Estimation Method for High-dimensional Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to operate these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z)
- Linear Embedding-based High-dimensional Batch Bayesian Optimization without Reconstruction Mappings [21.391136086094225]
We show that our method is applicable to batch optimization problems with thousands of dimensions without any computational difficulty.
We demonstrate the effectiveness of our method on high-dimensional benchmarks and a real-world function.
arXiv Detail & Related papers (2022-11-02T08:11:10Z)
- Fighting the curse of dimensionality: A machine learning approach to finding global optima [77.34726150561087]
This paper shows how to find global optima in structural optimization problems.
By exploiting certain cost functions, we obtain the global optimum at best, or results superior to established optimization procedures at worst.
arXiv Detail & Related papers (2021-10-28T09:50:29Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- High-Dimensional Bayesian Optimization via Nested Riemannian Manifolds [0.0]
We propose to exploit the geometry of non-Euclidean search spaces, which often arise in a variety of domains, to learn structure-preserving mappings.
Our approach features geometry-aware Gaussian processes that jointly learn a nested-manifold embedding and a representation of the objective function in the latent space.
arXiv Detail & Related papers (2020-10-21T11:24:11Z)
- Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining [1.5293427903448025]
We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.
We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value.
This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems.
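A minimal sketch of the weighting step this summary describes, assuming a rank-based scheme in which points with better objective values receive larger training weights. The `retrain_model` placeholder is purely hypothetical: a real implementation would refit a deep generative model with a weighted training loss rather than compute a weighted mean.

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_weights(y, k=1e-3):
    # Rank-based weights: rank 0 is the best point, so it gets the
    # largest weight; weights are normalized to sum to one.
    ranks = np.argsort(np.argsort(-y))
    w = 1.0 / (k * len(y) + ranks)
    return w / w.sum()

def retrain_model(data, weights):
    # Hypothetical placeholder for weighted retraining of a generative
    # model; here it just returns the weighted mean of the queried data.
    return (weights[:, None] * data).sum(0)

objective = lambda x: -np.sum((x - 1.0) ** 2)   # toy black-box objective

# Initial dataset of queried points and their objective values.
data = rng.normal(size=(20, 5))
y = np.array([objective(x) for x in data])

for step in range(10):
    w = rank_weights(y)
    center = retrain_model(data, w)              # periodic weighted "retraining"
    x_new = center + 0.1 * rng.normal(size=5)    # query near the model's mode
    data = np.vstack([data, x_new])
    y = np.append(y, objective(x_new))
```

The key property is that retraining is biased toward high-value regions of the trajectory, so the generative model's latent space concentrates where optimization progress is being made.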
arXiv Detail & Related papers (2020-06-16T14:34:40Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
- Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits [55.59198568303196]
This work presents LineCoSpar, a human-in-the-loop preference-based framework to learn user preferences in high dimensions.
In simulations and human trials, we empirically verify that LineCoSpar is a sample-efficient approach for high-dimensional preference optimization.
This result has implications for exoskeleton gait synthesis, an active field with applications to clinical use and patient rehabilitation.
arXiv Detail & Related papers (2020-03-13T22:02:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.