LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions
- URL: http://arxiv.org/abs/2311.11328v2
- Date: Sun, 16 Jun 2024 10:22:52 GMT
- Title: LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions
- Authors: E. Visser, C. E. van Daalen, J. C. Schoeman
- Abstract summary: We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is a popular method for optimizing expensive black-box functions. BO has several well-documented shortcomings, including computational slowdown with longer optimization runs, poor suitability for non-stationary or ill-conditioned objective functions, and poor convergence characteristics. Several algorithms have been proposed that incorporate local strategies, such as trust regions, into BO to mitigate these limitations; however, none address all of them satisfactorily. To address these shortcomings, we propose the LABCAT algorithm, which extends trust-region-based BO by adding a rotation aligning the trust region with the weighted principal components and an adaptive rescaling strategy based on the length-scales of a local Gaussian process surrogate model with automatic relevance determination. Through extensive numerical experiments using a set of synthetic test functions and the well-known COCO benchmarking software, we show that the LABCAT algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
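The abstract names two concrete mechanisms: a rotation aligning the trust region with weighted principal components, and a rescaling driven by the ARD length-scales of a local GP. The following is a minimal numpy sketch of both under stated assumptions; the function names, the exponential weighting scheme, and the fixed length-scales are illustrative choices, not the authors' implementation.

```python
import numpy as np

def weighted_pca_rotation(X, y):
    """Rotation whose columns are the principal axes of the observed points,
    with better (lower) objective values weighted more heavily."""
    w = np.exp(-(y - y.min()) / (np.ptp(y) + 1e-12))  # better point -> larger weight
    w /= w.sum()
    mean = w @ X
    Xc = (X - mean) * np.sqrt(w)[:, None]             # weighted, centred points
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt.T                                       # columns = principal directions

def trust_region_map(rotation, lengthscales):
    """Linear map sending the unit box to a trust region rotated onto the
    principal axes and stretched by the GP's relative ARD length-scales."""
    scale = lengthscales / lengthscales.max()
    return rotation @ np.diag(scale)

# Toy usage on an ill-conditioned 2-D quadratic; the length-scales would normally
# come from a fitted local GP with automatic relevance determination (assumed here).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
y = X[:, 0] ** 2 + 100.0 * X[:, 1] ** 2
L = trust_region_map(weighted_pca_rotation(X, y), np.array([1.0, 0.1]))
x_best = X[np.argmin(y)]
candidates = x_best + rng.uniform(-1.0, 1.0, size=(128, 2)) @ L.T  # sample region
```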
Related papers
- Robust Bayesian Optimization via Localized Online Conformal Prediction [37.549297668783254]
We introduce localized online conformal prediction-based Bayesian optimization (LOCBO).
LOCBO calibrates the GP model through localized online conformal prediction (CP).
We provide theoretical performance guarantees for LOCBO's iterates that hold for the unobserved objective function.
arXiv Detail & Related papers (2024-11-26T12:45:54Z)
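The LOCBO summary names localized online conformal prediction without spelling out the recipe. Below is a hedged sketch of conformal calibration of a GP interval with a simple distance-based localization weighting; it is a generic construction under these assumptions, not necessarily LOCBO's exact procedure.

```python
import numpy as np

def localized_conformal_interval(x, X_cal, resid_cal, sd_cal, mu, sd,
                                 alpha=0.1, bandwidth=0.5):
    """Rescale a GP predictive interval at x using conformal scores from a
    calibration set, weighted by proximity to x (the 'localized' part)."""
    scores = np.abs(resid_cal) / sd_cal                 # normalized nonconformity
    w = np.exp(-np.sum((X_cal - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    w /= w.sum()
    order = np.argsort(scores)
    cum = np.cumsum(w[order])
    idx = min(np.searchsorted(cum, 1.0 - alpha), len(cum) - 1)
    q = scores[order][idx]                              # weighted (1-alpha) quantile
    return mu - q * sd, mu + q * sd

# mu, sd: GP posterior mean/std at the query point x; (X_cal, resid_cal, sd_cal)
# are calibration inputs, residuals, and predictive std devs from earlier
# evaluations (assumed available).
```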
- Bayesian Optimization for Hyperparameters Tuning in Neural Networks [0.0]
Bayesian Optimization is a derivative-free global optimization method suitable for black-box functions with continuous inputs and limited evaluation budgets.
This study investigates the application of BO for the hyperparameter tuning of neural networks, specifically targeting the enhancement of Convolutional Neural Networks (CNNs).
Experimental outcomes reveal that BO effectively balances exploration and exploitation, converging rapidly towards optimal settings for CNN architectures.
This approach underlines the potential of BO in automating neural network tuning, contributing to improved accuracy and computational efficiency in machine learning pipelines.
arXiv Detail & Related papers (2024-10-29T09:23:24Z)
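As a concrete illustration of the BO loop this entry describes, here is a minimal sketch using scikit-learn's GP surrogate and expected improvement. The 1-D search over log learning rate and the stand-in `validation_error` are assumptions for brevity; a real pipeline would train a CNN at that point.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def validation_error(log_lr):
    """Stand-in for training a network and returning validation error."""
    return (log_lr + 3.0) ** 2 + 0.05 * np.sin(5.0 * log_lr)

rng = np.random.default_rng(0)
X = rng.uniform(-6.0, 0.0, size=(4, 1))                # initial design
y = np.array([validation_error(x[0]) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-6.0, 0.0, size=(256, 1))
    mu, sd = gp.predict(cand, return_std=True)
    z = (y.min() - mu) / np.maximum(sd, 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]                           # exploit/explore trade-off
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print("best log10(lr):", X[np.argmin(y), 0])
```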
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
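A standard way to build the invariant kernels this entry refers to is to average a base kernel over a finite transformation group; the sketch below uses that construction with an assumed sign-flip symmetry, which may differ from the paper's kernels.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Base squared-exponential kernel between row-stacked points."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def invariant_kernel(A, B, group, ls=1.0):
    """Average the base kernel over a finite group of input transformations;
    the result satisfies k(g(x), y) = k(x, y) for every g in the group."""
    return np.mean([rbf(g(A), B, ls) for g in group], axis=0)

# Assumed symmetry for illustration: the objective is invariant to flipping
# the sign of the first coordinate, so the group has two elements.
group = [lambda Z: Z,
         lambda Z: Z * np.array([-1.0, 1.0])]

A = np.random.default_rng(0).normal(size=(5, 2))
K = invariant_kernel(A, A, group)   # drop-in covariance for a GP surrogate
```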
- High-dimensional Bayesian Optimization via Covariance Matrix Adaptation Strategy [16.521207412129833]
We propose a novel technique for defining the local regions using the Covariance Matrix Adaptation (CMA) strategy.
Based on this search distribution, we then define the local regions consisting of data points with high probabilities of being the global optimum.
Our approach serves as a meta-algorithm: it can incorporate existing black-box BO methods, such as standard BO, TuRBO, and BAxUS, to find the global optimum.
arXiv Detail & Related papers (2024-02-05T15:32:10Z)
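One plausible reading of the CMA-defined local regions is a high-probability ellipsoid of the CMA search distribution. The sketch below keeps candidates whose Mahalanobis distance falls inside a chosen probability mass; this is an illustrative rule, not the paper's exact definition.

```python
import numpy as np
from scipy.stats import chi2

def in_local_region(cand, mean, cov, sigma=1.0, mass=0.9):
    """Keep candidates inside the `mass` probability ellipsoid of the CMA
    search distribution N(mean, sigma^2 * cov)."""
    d = cand.shape[1]
    P = np.linalg.inv(sigma ** 2 * cov)
    diff = cand - mean
    m2 = np.einsum('ij,jk,ik->i', diff, P, diff)       # squared Mahalanobis distance
    return m2 <= chi2.ppf(mass, df=d)

rng = np.random.default_rng(0)
mean = np.zeros(2)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])               # CMA-adapted covariance (assumed)
cand = rng.normal(size=(500, 2))
region = cand[in_local_region(cand, mean, cov)]        # BO then runs inside this set
```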
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower cost and better robustness to noise, as verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
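A common way to realize the high-confidence region-of-interest filtering BALLET describes is to keep only points whose upper confidence bound still exceeds the best lower confidence bound: anything else provably cannot be the maximizer at that confidence level. A minimal sketch (maximization convention, fixed beta assumed):

```python
import numpy as np

def region_of_interest(mu, sigma, beta=2.0):
    """Filter candidates to those whose upper confidence bound reaches the best
    lower confidence bound -- points that could still be the maximizer."""
    ucb = mu + beta * sigma
    lcb = mu - beta * sigma
    return ucb >= lcb.max()

# mu, sigma: GP posterior mean/std on a candidate grid (assumed given).
rng = np.random.default_rng(0)
mu = rng.normal(size=1000)
sigma = 0.1 + rng.uniform(size=1000)
mask = region_of_interest(mu, sigma)
print(f"search space shrunk to {mask.mean():.0%} of candidates")
```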
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
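One well-known way to equip a tree ensemble with a kernel, and hence usable uncertainty estimates, is leaf co-occurrence: two inputs are similar in proportion to how many trees route them to the same leaf. The sketch below uses that construction as an assumption; it is not necessarily the paper's kernel.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_kernel(model, A, B):
    """Similarity = fraction of trees in which two inputs share a leaf.
    This is a valid (PSD) kernel over mixed or discontinuous feature spaces."""
    La, Lb = model.apply(A), model.apply(B)            # leaf ids, shape (n, n_trees)
    return (La[:, None, :] == Lb[None, :, :]).mean(axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(100, 3))
y = np.sin(X[:, 0]) + (X[:, 1] > 0)                    # step makes it piecewise
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
K = forest_kernel(rf, X[:5], X[:5])                    # usable as a GP covariance
```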
- Sparse Bayesian Optimization [16.867375370457438]
We present several regularization-based approaches that allow us to discover sparse and more interpretable configurations.
We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity.
We show that we are able to efficiently optimize for sparsity.
arXiv Detail & Related papers (2022-03-03T18:25:33Z)
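A typical homotopy-continuation relaxation replaces the non-differentiable L0 count with a smooth surrogate whose sharpness is annealed. The sketch below uses the classic x²/(x² + β) surrogate with β driven toward 0; this is an illustrative choice that may differ from the paper's relaxation.

```python
import numpy as np

def smooth_l0(x, beta):
    """Differentiable surrogate for the L0 norm; tends to the count of
    nonzero coordinates as beta -> 0."""
    return np.sum(x ** 2 / (x ** 2 + beta))

def penalized(f, lam, beta):
    """Objective plus a sparsity penalty at homotopy parameter beta."""
    return lambda x: f(x) + lam * smooth_l0(x, beta)

f = lambda x: np.sum((x - np.array([0.5, 0.0, 0.0])) ** 2)  # toy objective
x = np.array([0.4, 0.1, -0.1])
for beta in [1.0, 0.1, 0.01, 0.001]:        # continuation: smooth -> near-exact L0
    # each stage would hand penalized(f, 0.1, beta) to an optimizer (e.g. a BO
    # loop); here we only evaluate it to show the penalty sharpening
    print(beta, penalized(f, 0.1, beta)(x))
```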
- TREGO: a Trust-Region Framework for Efficient Global Optimization [63.995130144110156]
We propose and analyze a trust-region-like EGO method (TREGO).
TREGO alternates between regular EGO steps and local steps within a trust region.
Our algorithm enjoys strong global convergence properties, while departing from EGO only for a subset of optimization steps.
arXiv Detail & Related papers (2021-01-18T00:14:40Z)
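The alternation TREGO describes can be sketched structurally as follows, with random proposals standing in for the EGO acquisition maximization and a simple grow/shrink rule for the trust region; both stand-ins are assumptions, chosen only to show the control flow.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 0.3) ** 2)        # toy objective on [0, 1]^2
x_best = rng.uniform(0.0, 1.0, 2)
f_best, radius = f(x_best), 0.25

for t in range(50):
    if t % 2 == 0:
        # global (regular EGO) step: stand-in for maximizing EI over the domain
        x = rng.uniform(0.0, 1.0, 2)
    else:
        # local step: stand-in for maximizing EI inside the trust region
        x = np.clip(x_best + rng.uniform(-radius, radius, 2), 0.0, 1.0)
    fx = f(x)
    if fx < f_best:
        x_best, f_best = x, fx
        radius = min(2.0 * radius, 0.5)     # success: accept and grow the region
    elif t % 2 == 1:
        radius = max(0.5 * radius, 1e-3)    # local failure: shrink the region
```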
- Scalable Constrained Bayesian Optimization [10.820024633762596]
The global optimization of a high-dimensional black-box function under black-box constraints is a pervasive task in machine learning, control, and the sciences.
We propose the scalable constrained Bayesian optimization (SCBO) algorithm that overcomes the above challenges and pushes the state of the art.
arXiv Detail & Related papers (2020-02-20T01:48:46Z)
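A common building block in constrained BO of this kind is to weight an acquisition function by the modeled probability that the black-box constraints are satisfied. The sketch below shows that generic feasibility-weighted expected improvement; it is not SCBO's specific algorithm, whose details the abstract does not give.

```python
import numpy as np
from scipy.stats import norm

def constrained_acquisition(mu_f, sd_f, mu_c, sd_c, f_best):
    """Expected improvement on the objective, weighted by the GP-modeled
    probability that the black-box constraint c(x) <= 0 holds."""
    z = (f_best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (f_best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)
    p_feasible = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))  # P[c(x) <= 0]
    return ei * p_feasible

# mu_f, sd_f: objective GP posterior on a candidate set; mu_c, sd_c: constraint
# GP posterior on the same set (both assumed given by fitted surrogates).
```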