On the role of Model Uncertainties in Bayesian Optimization
- URL: http://arxiv.org/abs/2301.05983v1
- Date: Sat, 14 Jan 2023 21:45:17 GMT
- Title: On the role of Model Uncertainties in Bayesian Optimization
- Authors: Jonathan Foldager, Mikkel Jordahn, Lars Kai Hansen, Michael Riis Andersen
- Abstract summary: We study the relationship between the BO performance (regret) and uncertainty calibration for popular surrogate models.
Our results show a positive association between calibration error and regret, but interestingly, this association disappears when we control for the type of model in the analysis.
- Score: 8.659630453400593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) is a popular method for black-box optimization,
which relies on uncertainty as part of its decision-making process when
deciding which experiment to perform next. However, not much work has addressed
the effect of uncertainty on the performance of the BO algorithm and to what
extent calibrated uncertainties improve the ability to find the global optimum.
In this work, we provide an extensive study of the relationship between the BO
performance (regret) and uncertainty calibration for popular surrogate models
and compare them across both synthetic and real-world experiments. Our results
confirm that Gaussian Processes are strong surrogate models and that they tend
to outperform other popular models. Our results further show a positive
association between calibration error and regret, but interestingly, this
association disappears when we control for the type of model in the analysis.
We also study the effect of re-calibration and demonstrate that it generally
does not lead to improved regret. Finally, we provide theoretical justification
for why uncertainty calibration might be difficult to combine with BO due to
the small sample sizes commonly used.
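The two central quantities in the abstract, uncertainty calibration and regret, can be made concrete with a small sketch. The following is a hypothetical illustration (not the authors' code): calibration error is measured as the average gap between nominal quantile levels and the empirical coverage of a Gaussian surrogate's predictive distribution, and simple regret as the gap between the global optimum and the best value found.

```python
import numpy as np
from scipy.stats import norm

def calibration_error(y_true, mu, sigma, quantiles=np.linspace(0.05, 0.95, 19)):
    """Mean |empirical coverage - nominal quantile| for Gaussian predictions."""
    # Probability integral transform: CDF value of each observation
    # under its own predictive distribution
    pits = norm.cdf(y_true, loc=mu, scale=sigma)
    # Empirical coverage: fraction of PIT values at or below each nominal level
    empirical = np.array([(pits <= q).mean() for q in quantiles])
    return float(np.abs(empirical - quantiles).mean())

def simple_regret(best_found, f_opt):
    """Simple regret: gap between the global optimum and the best value found."""
    return f_opt - best_found

rng = np.random.default_rng(0)
mu = rng.normal(size=10_000)
y = mu + rng.normal(scale=1.0, size=10_000)  # observation noise matches sigma=1

# Well-calibrated predictions: error near zero
print(calibration_error(y, mu, np.ones_like(mu)))
# Overconfident predictions (sigma too small): error grows substantially
print(calibration_error(y, mu, 0.3 * np.ones_like(mu)))
```

One point the paper raises is visible even in this toy setup: the empirical coverage estimate is itself noisy, and with the small sample sizes typical of BO loops (tens of points, not thousands), the calibration error becomes hard to estimate reliably.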
Related papers
- Co-Learning Bayesian Optimization [28.394424693363103]
We propose a novel BO algorithm labeled as co-learning BO (CLBO), which exploits both model diversity and agreement on unlabeled information to improve the overall surrogate accuracy with limited samples.
Through tests on five numerical toy problems and three engineering benchmarks, the effectiveness of the proposed CLBO is demonstrated.
arXiv Detail & Related papers (2025-01-23T02:25:10Z)
- Epidemiological Model Calibration via Graybox Bayesian Optimization [13.298472586395276]
Experimental results demonstrate that our proposed graybox variants of BO schemes can efficiently calibrate computationally expensive models.
We anticipate that the proposed calibration methods can be extended to enable fast calibration of more complex epidemiological models.
arXiv Detail & Related papers (2024-12-10T05:04:52Z)
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining post hoc calibration methods with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision language models.
We show that both OOD classification and OOD calibration errors have a shared upper bound consisting of two terms of ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.