Overfitting in Bayesian Optimization: an empirical study and
early-stopping solution
- URL: http://arxiv.org/abs/2104.08166v1
- Date: Fri, 16 Apr 2021 15:26:23 GMT
- Title: Overfitting in Bayesian Optimization: an empirical study and
early-stopping solution
- Authors: Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean
Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau
- Abstract summary: We propose the first problem-adaptive and interpretable criterion to early stop BO.
We show that our approach can substantially reduce compute time with little to no loss of test accuracy.
- Score: 41.782410830989136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian Optimization (BO) is a successful methodology to tune the
hyperparameters of machine learning algorithms. The user defines a metric of
interest, such as the validation error, and BO finds the optimal
hyperparameters that minimize it. However, the metric improvements on the
validation set may not translate to the test set, especially on small datasets.
In other words, BO can overfit. While cross-validation mitigates this, it comes
with high computational cost. In this paper, we carry out the first systematic
investigation of overfitting in BO and demonstrate that this is a serious yet
often overlooked concern in practice. We propose the first problem-adaptive and
interpretable criterion to early stop BO, reducing overfitting while mitigating
the cost of cross-validation. Experimental results on real-world hyperparameter
optimization tasks show that our approach can substantially reduce compute time
with little to no loss of test accuracy, demonstrating a clear practical
advantage over existing techniques.
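The abstract does not spell out the stopping criterion itself. As a rough sketch of the general idea, the loop below runs GP-based BO with expected improvement and stops once the best remaining expected improvement falls below the scale of an assumed cross-validation noise estimate; the objective `validation_error`, the `noise_level` estimate, and the `0.1` threshold are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical objective: noisy validation error of a model as a
# function of a single hyperparameter x in [0, 1].
def validation_error(x, rng):
    return (x - 0.3) ** 2 + 0.05 * rng.standard_normal()

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-12)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))              # initial design
y = np.array([validation_error(x[0], rng) for x in X])
noise_level = 0.05                              # assumed CV-based noise estimate

for it in range(50):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=noise_level ** 2,
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))     # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    ei = expected_improvement(mu, sigma, y.min())
    # Hypothetical stopping rule: halt when even the best candidate's
    # expected improvement is small relative to the validation noise.
    if ei.max() < 0.1 * noise_level:
        print(f"early stop at iteration {it}")
        break
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0], rng))

print("best x:", X[np.argmin(y), 0], "best validation error:", y.min())
```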
Related papers
- Provably Efficient Bayesian Optimization with Unknown Gaussian Process Hyperparameter Estimation [44.53678257757108]
We propose a new BO method that converges sub-linearly to the objective function's global optimum.
Our method uses a multi-armed bandit technique (EXP3) to add random data points to the BO process.
We demonstrate empirically that our method outperforms existing approaches on various synthetic and real-world problems.
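Since the summary names EXP3 explicitly, here is a minimal self-contained sketch of the EXP3 routine itself, with a hypothetical three-armed bandit; how the paper couples it to the BO loop is not reproduced here.

```python
import numpy as np

def exp3(reward_fn, n_arms, horizon, gamma=0.1, seed=0):
    """Minimal EXP3: exponential weights with uniform exploration."""
    rng = np.random.default_rng(seed)
    weights = np.ones(n_arms)
    for _ in range(horizon):
        probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=probs)
        reward = reward_fn(arm)                  # assumed to lie in [0, 1]
        # Importance-weighted update for the pulled arm only.
        weights[arm] *= np.exp(gamma * reward / (probs[arm] * n_arms))
    return weights / weights.sum()

# Hypothetical bandit: arm 2 has the highest mean reward.
means = np.array([0.2, 0.5, 0.8])
rng = np.random.default_rng(1)
final = exp3(lambda a: float(np.clip(means[a] + 0.1 * rng.standard_normal(), 0, 1)),
             n_arms=3, horizon=2000)
print("final arm probabilities:", final)
```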
arXiv Detail & Related papers (2023-06-12T03:35:45Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
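The generalized implicit gradients of iBaML are not reproduced here; as a toy illustration of the core implicit-gradient computation, the sketch below differentiates a validation loss through a ridge-regression inner problem via the implicit function theorem and checks the result against finite differences. The data, dimensions, and choice of ridge regression are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr, ytr = rng.standard_normal((50, 5)), rng.standard_normal(50)
Xval, yval = rng.standard_normal((20, 5)), rng.standard_normal(20)

def val_loss(lam):
    # Inner optimum of ||Xtr w - ytr||^2 + lam ||w||^2, then outer loss.
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(5), Xtr.T @ ytr)
    return np.sum((Xval @ w - yval) ** 2)

def implicit_hypergradient(lam):
    H = Xtr.T @ Xtr + lam * np.eye(5)           # inner Hessian / 2
    w = np.linalg.solve(H, Xtr.T @ ytr)         # inner optimum w*(lam)
    g_w = 2 * Xval.T @ (Xval @ w - yval)        # d L_val / d w at w*
    # Implicit function theorem gives dw*/dlam = -H^{-1} w*, hence:
    return -g_w @ np.linalg.solve(H, w)

lam, eps = 0.5, 1e-5
print("implicit:", implicit_hypergradient(lam))
print("finite-diff:", (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps))
```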
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted ranges of the ROC curve.
Most existing methods can only optimize the PAUC approximately, leading to uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
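For background, the sketch below computes a standard empirical estimate of the one-way partial AUC over false-positive rates in [0, beta], comparing positives against only the top-scoring beta fraction of negatives; the synthetic scores and the choice beta = 0.3 are assumptions for illustration.

```python
import numpy as np

def one_way_partial_auc(scores_pos, scores_neg, beta=0.3):
    """Empirical OPAUC restricted to false-positive rates in [0, beta].

    Only the top-scoring beta fraction of negatives can be ranked above
    the decision threshold while FPR <= beta, so pairwise comparisons
    are taken against those negatives only.
    """
    k = max(1, int(np.floor(beta * len(scores_neg))))
    hard_neg = np.sort(scores_neg)[::-1][:k]            # top-k negatives
    wins = (scores_pos[:, None] > hard_neg[None, :]).mean()
    ties = (scores_pos[:, None] == hard_neg[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=500)                    # hypothetical scores
neg = rng.normal(0.0, 1.0, size=500)
print("OPAUC(beta=0.3):", one_way_partial_auc(pos, neg))
```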
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the given function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
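The learnable-parameter generalization itself is not sketched here; for context, the standard split conformal baseline that such work extends looks roughly as follows, where the absolute-residual score, alpha = 0.1, and the synthetic calibration data are assumptions.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Split conformal regression: intervals with >= (1 - alpha) coverage."""
    scores = np.abs(cal_y - cal_pred)                   # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
cal_pred = rng.normal(size=200)                         # hypothetical predictions
cal_y = cal_pred + 0.3 * rng.standard_normal(200)       # hypothetical labels
lo, hi = split_conformal_interval(cal_pred, cal_y, rng.normal(size=5))
print(np.c_[lo, hi])
```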
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Two-step Lookahead Bayesian Optimization with Inequality Constraints [21.703234193908038]
We propose a two-step lookahead constrained Bayesian optimization acquisition function (2-OPT-C) supporting both sequential and batch settings.
In numerical experiments, 2-OPT-C typically improves query efficiency by 2x or more over previous methods, and in some cases by 10x or more.
arXiv Detail & Related papers (2021-12-06T07:40:54Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
arXiv Detail & Related papers (2021-01-17T04:55:30Z)
- Efficient hyperparameter optimization by way of PAC-Bayes bound minimization [4.191847852775072]
We present an alternative objective that is equivalent to a Probably Approximately Correct-Bayes (PAC-Bayes) bound on the expected out-of-sample error.
We then devise an efficient gradient-based algorithm to minimize this objective.
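The paper's specific bound is not reproduced here; for reference, one standard PAC-Bayes bound of the McAllester/Maurer form, which such objectives typically instantiate, states that for a prior $P$ fixed before seeing the $n$ samples and a loss in $[0,1]$, with probability at least $1-\delta$, simultaneously for all posteriors $Q$:

$$
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{L}_n(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}
$$

where $L$ is the expected out-of-sample error and $\hat{L}_n$ the empirical error.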
arXiv Detail & Related papers (2020-08-14T15:54:51Z)
- A Gradient-based Bilevel Optimization Approach for Tuning Hyperparameters in Machine Learning [0.0]
We propose a bilevel solution method for solving the hyperparameter optimization problem.
The proposed method is general and can be easily applied to any class of machine learning algorithms.
We discuss the theory behind the proposed algorithm and perform an extensive computational study on two datasets.
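As a toy illustration of gradient-based bilevel tuning (not the paper's specific method), the sketch below obtains the hypergradient of a validation loss with respect to a ridge penalty by unrolling plain gradient descent on the inner problem; the data, step count, and learning rate are arbitrary assumptions, and the setup mirrors the implicit-gradient sketch further above.

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr, ytr = rng.standard_normal((50, 5)), rng.standard_normal(50)
Xval, yval = rng.standard_normal((20, 5)), rng.standard_normal(20)

def unrolled_hypergradient(lam, steps=1000, lr=0.005):
    """d L_val / d lam via unrolled inner gradient descent on the ridge loss."""
    w = np.zeros(5)
    J = np.zeros(5)                   # J_t = d w_t / d lam
    for _ in range(steps):
        # Chain rule through w_{t+1} = w_t - lr * grad(w_t, lam);
        # J must be updated using the pre-update w_t.
        J = J - lr * (2 * Xtr.T @ (Xtr @ J) + 2 * lam * J + 2 * w)
        w = w - lr * (2 * Xtr.T @ (Xtr @ w - ytr) + 2 * lam * w)
    g_w = 2 * Xval.T @ (Xval @ w - yval)   # outer gradient at final w
    return g_w @ J

print("hypergradient at lam=0.5:", unrolled_hypergradient(0.5))
```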
arXiv Detail & Related papers (2020-07-21T18:15:08Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the requirement that an attenuating step-size is needed for exact convergence against the fact that a constant step-size learns faster, up to an error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
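The paper's adaptive schedule is not reproduced here; the sketch below instead implements the classical "norm test" heuristic, which captures the same rates-versus-variance trade-off by growing the minibatch whenever gradient noise dominates the gradient signal under a constant step-size. The least-squares objective, the threshold theta, and the doubling schedule are illustrative assumptions.

```python
import numpy as np

# Hypothetical least-squares objective with cheap per-sample gradients.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((10_000, 20)), rng.standard_normal(10_000)
w = np.zeros(20)

batch, lr, theta = 32, 0.01, 1.0      # assumed norm-test threshold theta
for step in range(500):
    idx = rng.choice(len(A), size=batch, replace=False)
    resid = A[idx] @ w - b[idx]
    per_sample = 2 * A[idx] * resid[:, None]        # per-sample gradients
    g = per_sample.mean(axis=0)
    # Norm test: if the variance of the minibatch mean dominates the
    # squared gradient norm, the batch is too noisy; double it.
    var_of_mean = per_sample.var(axis=0).sum() / batch
    if var_of_mean > theta * g @ g:
        batch = min(2 * batch, len(A))
    w -= lr * g

print("final batch size:", batch, "final loss:", np.mean((A @ w - b) ** 2))
```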
arXiv Detail & Related papers (2020-07-02T16:02:02Z)