Towards Automated Design of Bayesian Optimization via Exploratory
Landscape Analysis
- URL: http://arxiv.org/abs/2211.09678v1
- Date: Thu, 17 Nov 2022 17:15:04 GMT
- Title: Towards Automated Design of Bayesian Optimization via Exploratory
Landscape Analysis
- Authors: Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom,
Marius Lindauer, Carola Doerr
- Abstract summary: We show that a dynamic selection of the AF can benefit the BO design.
We pave a way towards AutoML-assisted, on-the-fly BO designs that adjust their behavior on a run-by-run basis.
- Score: 11.143778114800272
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Bayesian optimization (BO) algorithms form a class of surrogate-based
heuristics, aimed at efficiently computing high-quality solutions for numerical
black-box optimization problems. The BO pipeline is highly modular, with
different design choices for the initial sampling strategy, the surrogate
model, the acquisition function (AF), the solver used to optimize the AF, etc.
We demonstrate in this work that a dynamic selection of the AF can benefit the
BO design. More precisely, we show that already a na\"ive random forest
regression model, built on top of exploratory landscape analysis features that
are computed from the initial design points, suffices to recommend AFs that
outperform any static choice, when considering performance over the classic
BBOB benchmark suite for derivative-free numerical optimization methods on the
COCO platform. Our work hence paves a way towards AutoML-assisted, on-the-fly
BO designs that adjust their behavior on a run-by-run basis.
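The core idea of the abstract can be sketched in a few lines: summarize the initial design points with landscape features, then train a random forest that maps those features to the acquisition function (AF) expected to perform best. The sketch below is a toy illustration only: `simple_landscape_features` is a hypothetical stand-in for proper exploratory landscape analysis features, and the "best AF" labels follow an invented rule rather than the paper's BBOB/COCO benchmark data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical, simplified landscape features computed from initial design
# points (the paper uses full exploratory landscape analysis features).
def simple_landscape_features(X, y):
    """Return a small feature vector summarizing the sampled landscape."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.array([
        y.mean(), y.std(),                                 # y-distribution stats
        np.corrcoef(np.linalg.norm(X, axis=1), y)[0, 1],   # center-distance vs. y
        dists.mean(),                                      # dispersion of the design
    ])

rng = np.random.default_rng(0)

# Toy training set: each sampled problem is labeled with the AF assumed to
# perform best on it (labels are illustrative, not benchmark-derived).
feats, labels = [], []
for i in range(60):
    X = rng.uniform(-5, 5, size=(20, 2))
    if i % 2 == 0:
        y = (X ** 2).sum(axis=1)        # smooth, unimodal -> "EI" (toy rule)
        labels.append("EI")
    else:
        y = np.sin(3 * X).sum(axis=1)   # multimodal -> "PI" (toy rule)
        labels.append("PI")
    feats.append(simple_landscape_features(X, y))

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)

# Recommend an AF for an unseen problem from its initial design alone.
X_new = rng.uniform(-5, 5, size=(20, 2))
y_new = (X_new ** 2).sum(axis=1)
recommended_af = clf.predict([simple_landscape_features(X_new, y_new)])[0]
print(recommended_af)
```

In the paper's setting, the recommendation would be refreshed on a run-by-run basis, which is what makes the design "on-the-fly" rather than a single static AF choice.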
Related papers
- Landscape-Aware Automated Algorithm Configuration using Multi-output Mixed Regression and Classification [0.01649298969786889]
We investigate the potential of randomly generated functions (RGF) for the model training.
We focus on automated algorithm configuration (AAC)
We analyze the performance of dense neural network (NN) models in handling the mixed regression and classification tasks.
arXiv Detail & Related papers (2024-09-02T20:04:41Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce utility, which is a function predefined by each user and describes the trade-off between cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Predictive Modeling through Hyper-Bayesian Optimization [60.586813904500595]
We propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
The algorithm moves back and forth between BO in the model space and BO in the function space, where the goodness of the recommended model is captured.
In addition to improved sample efficiency, the framework outputs information about the black-box function.
arXiv Detail & Related papers (2023-08-01T04:46:58Z)
- Self-Adjusting Weighted Expected Improvement for Bayesian Optimization [11.955557264002204]
This work focuses on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions.
We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner.
Our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure.
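The exploration-exploitation trade-off that SAWEI self-adjusts can be made concrete with the classic weighted Expected Improvement, where a weight `alpha` balances an exploitation term against an uncertainty term. This is a minimal sketch of the weighted-EI formula only; SAWEI's actual contribution, the data-driven online rule for adjusting `alpha`, is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, alpha):
    """Weighted Expected Improvement for minimization.

    alpha = 0.5 recovers (half of) standard EI; larger alpha emphasizes the
    exploitation term, smaller alpha the exploration term.
    """
    sigma = np.maximum(sigma, 1e-12)          # guard against zero uncertainty
    z = (f_best - mu) / sigma
    exploit = (f_best - mu) * norm.cdf(z)     # improvement-weighted probability
    explore = sigma * norm.pdf(z)             # uncertainty term
    return alpha * exploit + (1 - alpha) * explore

# A confident low-mean candidate scored exploitatively, and an uncertain
# candidate scored exploratively:
print(weighted_ei(mu=0.9, sigma=0.01, f_best=1.0, alpha=0.9))
print(weighted_ei(mu=1.0, sigma=0.5, f_best=1.0, alpha=0.1))
```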
arXiv Detail & Related papers (2023-06-07T09:00:19Z)
- Optimization for truss design using Bayesian optimization [1.5070398746522742]
The shape of the truss is a dominant factor in determining the capacity of load it can bear.
Within a given parameter space, our goal is to find the parameters of a hull that maximize the load-bearing capacity while not yielding to the induced stress.
We rely on finite element analysis, which is a computationally costly design analysis tool for design evaluation.
arXiv Detail & Related papers (2023-05-27T10:28:27Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- BOSH: Bayesian Optimization by Sampling Hierarchically [10.10241176664951]
We propose a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations.
We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyperparameter tuning tasks.
arXiv Detail & Related papers (2020-07-02T07:35:49Z)
- Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection [1.1240669509034296]
We propose to leverage results from optimal control to scale BO to higher dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
arXiv Detail & Related papers (2020-01-21T09:04:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.