Self-Correcting Bayesian Optimization through Bayesian Active Learning
- URL: http://arxiv.org/abs/2304.11005v3
- Date: Thu, 15 Feb 2024 13:52:33 GMT
- Title: Self-Correcting Bayesian Optimization through Bayesian Active Learning
- Authors: Carl Hvarfner, Erik Hellsten, Frank Hutter, Luigi Nardi
- Abstract summary: We present two acquisition functions that explicitly prioritize hyperparameter learning.
We then introduce Self-Correcting Bayesian Optimization (SCoreBO), which extends SAL to perform Bayesian optimization and active learning simultaneously.
- Score: 46.235017111395344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian processes (GPs) are the model of choice in Bayesian optimization and
active learning. Yet, they are highly dependent on cleverly chosen
hyperparameters to reach their full potential, and little effort is devoted to
finding good hyperparameters in the literature. We demonstrate the impact of
selecting good hyperparameters for GPs and present two acquisition functions
that explicitly prioritize hyperparameter learning. Statistical distance-based
Active Learning (SAL) considers the average disagreement between samples from
the posterior, as measured by a statistical distance. SAL outperforms the
state-of-the-art in Bayesian active learning on several test functions. We then
introduce Self-Correcting Bayesian Optimization (SCoreBO), which extends SAL to
perform Bayesian optimization and active learning simultaneously. SCoreBO
learns the model hyperparameters at improved rates compared to vanilla BO,
while outperforming the latest Bayesian optimization methods on traditional
benchmarks. Moreover, we demonstrate the importance of self-correction on
atypical Bayesian optimization tasks.
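The disagreement-based idea behind SAL can be sketched in a few lines: score each candidate point by the average pairwise statistical distance between GP posterior predictives obtained under different hyperparameter draws, and query where that disagreement is largest. The sketch below is illustrative only (not the authors' code): the RBF-kernel GP, the specific lengthscale draws, and the choice of the closed-form 2-Wasserstein distance between 1-D Gaussians are all assumptions made for the example.

```python
# Illustrative SAL-style acquisition (assumptions: RBF kernel, fixed noise,
# hypothetical lengthscale draws standing in for hyperparameter posterior samples).
import numpy as np

def rbf_posterior(X, y, Xs, lengthscale, noise=1e-2):
    """GP posterior mean/std at candidates Xs for an RBF kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kss = k(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def sal_acquisition(X, y, Xs, lengthscales):
    """Average pairwise 2-Wasserstein distance between predictive Gaussians
    across hyperparameter draws. Large values flag points where the posteriors
    disagree, i.e. where an observation would best pin down the hyperparameters."""
    preds = [rbf_posterior(X, y, Xs, ls) for ls in lengthscales]
    score = np.zeros(len(Xs))
    n = len(preds)
    for i in range(n):
        for j in range(i + 1, n):
            (m1, s1), (m2, s2) = preds[i], preds[j]
            # Closed-form W2 distance between two 1-D Gaussians.
            score += np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)
    return score / (n * (n - 1) / 2)

# Toy usage: score 50 candidates under three hypothetical lengthscale draws.
X = np.array([0.0, 0.5, 1.0])
y = np.sin(2 * np.pi * X)
Xs = np.linspace(0.0, 1.0, 50)
scores = sal_acquisition(X, y, Xs, lengthscales=[0.1, 0.3, 0.9])
x_next = Xs[np.argmax(scores)]
```

SCoreBO then combines a disagreement score of this kind with a standard BO acquisition, so that queries both reduce hyperparameter uncertainty and make progress on the optimization objective.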
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- High-dimensional Bayesian Optimization with Group Testing [7.12295305987761]
We propose a group testing approach to identify active variables to facilitate efficient optimization in high-dimensional domains.
The proposed algorithm, Group Testing Bayesian Optimization (GTBO), first runs a testing phase where groups of variables are systematically selected and tested.
In the second phase, GTBO guides optimization by placing more importance on the active dimensions.
arXiv Detail & Related papers (2023-10-05T12:52:27Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method locates good hyperparameters at least three times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- BOSH: Bayesian Optimization by Sampling Hierarchically [10.10241176664951]
We propose a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations.
We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyperparameter tuning tasks.
arXiv Detail & Related papers (2020-07-02T07:35:49Z)
- Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling [34.67740033646052]
By learning a parameterization of IS that trades off evaluation complexity and quality, we improve upon Bayesian optimization state-of-the-art runtime and final validation error across a variety of datasets and complex neural architectures.
arXiv Detail & Related papers (2020-02-23T15:52:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.