Uncertainty Quantification for Bayesian Optimization
- URL: http://arxiv.org/abs/2002.01569v2
- Date: Fri, 5 May 2023 12:41:04 GMT
- Title: Uncertainty Quantification for Bayesian Optimization
- Authors: Rui Tuo, Wenjia Wang
- Abstract summary: We propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions of the maximum point (or value) of the objective function.
Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
- Score: 12.433600693422235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization is a class of global optimization techniques. In
Bayesian optimization, the underlying objective function is modeled as a
realization of a Gaussian process. Although the Gaussian process assumption
implies a random distribution of the Bayesian optimization outputs,
quantification of this uncertainty is rarely studied in the literature. In this
work, we propose a novel approach to assess the output uncertainty of Bayesian
optimization algorithms, which proceeds by constructing confidence regions of
the maximum point (or value) of the objective function. These regions can be
computed efficiently, and their confidence levels are guaranteed by the uniform
error bounds for sequential Gaussian process regression newly developed in the
present work. Our theory provides a unified uncertainty quantification
framework for all existing sequential sampling policies and stopping criteria.
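To make the construction concrete, here is a minimal sketch (not the authors' code) of how a confidence region for the maximum point could be extracted from a sequentially fitted Gaussian process surrogate: a candidate point stays in the region whenever its upper confidence envelope reaches the best lower confidence envelope over all candidates. The constant `beta` stands in for the uniform error-bound radius derived in the paper; the kernel, candidate grid, and toy objective below are assumptions made purely for illustration.

```python
# Minimal sketch: confidence region for the maximizer of an objective modeled
# by a Gaussian process. `beta` is an assumed stand-in for the paper's
# uniform error-bound radius, not the theoretically derived quantity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def maximizer_confidence_region(X_obs, y_obs, candidates, beta=2.0):
    """Return the candidate points that cannot be ruled out as the maximizer.

    A point x is kept if mu(x) + beta*sigma(x) >= max_z [mu(z) - beta*sigma(z)],
    i.e. its upper envelope reaches the best guaranteed lower envelope.
    """
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)                       # sequential GP regression step
    mu, sigma = gp.predict(candidates, return_std=True)
    upper = mu + beta * sigma                  # pointwise upper envelope
    lower = mu - beta * sigma                  # pointwise lower envelope
    return candidates[upper >= lower.max()]

# Toy usage on a 1-D objective (illustrative only).
rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x
X_obs = rng.uniform(-1.0, 2.0, size=(8, 1))
y_obs = f(X_obs).ravel()
candidates = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)
region = maximizer_confidence_region(X_obs, y_obs, candidates)
print(f"region keeps {len(region)} of {len(candidates)} candidate points")
```

Applying the same envelopes to the predicted values, rather than the locations, would give a confidence interval for the maximum value; in either case the coverage guarantee rests on a uniform error bound that holds simultaneously over the whole design space.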
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, while the additional exploration step facilitates their convergence.
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Stochastic Bayesian Optimization with Unknown Continuous Context
Distribution via Kernel Density Estimation [28.413085548038932]
We propose two algorithms that employ kernel density estimation to learn the probability density function (PDF) of the continuous context variable online.
Theoretical results demonstrate that both algorithms have sub-linear Bayesian cumulative regret on the expectation objective.
arXiv Detail & Related papers (2023-12-16T11:32:28Z) - Quantization-Based Optimization: Alternative Stochastic Approximation of
Global Optimization [0.0]
We propose a global optimization algorithm based on quantizing the energy level of an objective function in an NP-hard problem.
Numerical experiments show that the proposed algorithm outperforms conventional learning methods in solving NP-hard optimization problems.
arXiv Detail & Related papers (2022-11-08T03:01:45Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Relaxed Gaussian process interpolation: a goal-oriented approach to
Bayesian optimization [0.0]
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling.
The method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest.
It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization.
arXiv Detail & Related papers (2022-06-07T06:26:46Z) - Bayesian Optimization of Function Networks [20.73717187683924]
We consider Bayesian optimization of the output of a network of functions, where each function takes as input the output of its parent nodes, and where the network takes significant time to evaluate.
Our approach delivers greater query efficiency by leveraging information that standard Bayesian optimization ignores: the intermediate outputs within the network.
We show that our approach dramatically outperforms standard Bayesian optimization methods in several synthetic and real-world problems.
arXiv Detail & Related papers (2021-12-31T05:35:21Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Randomised Gaussian Process Upper Confidence Bound for Bayesian
Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret (a minimal sketch of this sampled trade-off appears after this list).
arXiv Detail & Related papers (2020-06-08T00:28:41Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on only a few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
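The randomised GP-UCB entry above replaces the fixed exploration-exploitation trade-off parameter with a draw from a distribution at each step. A minimal sketch of that acquisition step follows; it is not the authors' reference implementation, and the half-normal sampling distribution, its scale, and the RBF kernel are assumptions made for the example.

```python
# Minimal sketch: one GP-UCB acquisition step with a randomly sampled
# trade-off parameter (assumed half-normal), per the randomised GP-UCB idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def randomised_ucb_step(X_obs, y_obs, candidates, rng, scale=1.0):
    """Pick the next query point by maximizing mu + beta*sigma with random beta."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    beta = abs(rng.normal(loc=0.0, scale=scale))   # sampled, not a fixed constant
    return candidates[np.argmax(mu + beta * sigma)]

# Toy usage: choose the next evaluation point on a dense 1-D grid.
rng = np.random.default_rng(1)
f = lambda x: np.exp(-x**2) * np.sin(5 * x)
X_obs = rng.uniform(-2.0, 2.0, size=(5, 1))
y_obs = f(X_obs).ravel()
candidates = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
print("next query point:", randomised_ucb_step(X_obs, y_obs, candidates, rng))
```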
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.