Robust Bayesian Target Value Optimization
- URL: http://arxiv.org/abs/2301.04344v1
- Date: Wed, 11 Jan 2023 07:44:59 GMT
- Title: Robust Bayesian Target Value Optimization
- Authors: Johannes G. Hoffer and Sascha Ranftl and Bernhard C. Geiger
- Abstract summary: We consider the problem of finding an input to a black box function such that the output of the black box function is as close as possible to a target value in the sense of the expected squared error.
We derive acquisition functions for common criteria such as the expected improvement, the probability of improvement, and the lower confidence bound, assuming that aleatoric effects are Gaussian with known variance.
- Score: 6.606745253604263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of finding an input to a stochastic black box
function such that the scalar output of the black box function is as close as
possible to a target value in the sense of the expected squared error. While
the optimization of stochastic black boxes is classic in (robust) Bayesian
optimization, the current approaches based on Gaussian processes predominantly
focus either on i) maximization/minimization rather than target value
optimization or ii) on the expectation, but not the variance of the output,
ignoring output variations due to stochasticity in uncontrollable environmental
variables. In this work, we fill this gap and derive acquisition functions for
common criteria such as the expected improvement, the probability of
improvement, and the lower confidence bound, assuming that aleatoric effects
are Gaussian with known variance. Our experiments illustrate that this setting
is compatible with certain extensions of Gaussian processes, and show that the
acquisition functions derived in this way can outperform classical Bayesian
optimization even when these assumptions are violated. An industrial use
case in billet forging is presented.
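Under these assumptions the objective is available in closed form: with a GP posterior $f(x) \sim \mathcal{N}(\mu(x), \sigma^2(x))$ and aleatoric noise of known variance $\sigma_a^2$, the expected squared deviation of an observation $y = f(x) + \varepsilon$ from the target $t$ is $(\mu(x) - t)^2 + \sigma^2(x) + \sigma_a^2$. The sketch below implements this surrogate plus a simple LCB-style variant on top of a scikit-learn GP; it is a minimal illustration of the setting, not a reproduction of the paper's derived acquisition functions (the target t, weight kappa, and toy objective are placeholders).

```python
# Minimal target-value acquisition sketch on a GP surrogate (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_squared_error(gp, X, t, sigma_a2):
    """Posterior expectation of (y - t)^2 for y = f(x) + eps.

    With f(x) | data ~ N(mu, s^2) and eps ~ N(0, sigma_a2),
    E[(y - t)^2] = (mu - t)^2 + s^2 + sigma_a2.
    """
    mu, s = gp.predict(X, return_std=True)
    return (mu - t) ** 2 + s ** 2 + sigma_a2

def lcb_squared_error(gp, X, t, sigma_a2, kappa=2.0):
    """LCB-style optimistic bound: shrink |mu - t| by kappa * s before squaring."""
    mu, s = gp.predict(X, return_std=True)
    gap = np.maximum(np.abs(mu - t) - kappa * s, 0.0)
    return gap ** 2 + sigma_a2

# Toy loop step: pick the candidate whose output is optimistically closest to t.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(len(x))  # noisy toy black box
X = rng.uniform(0, 2, size=(8, 1))
gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.1**2).fit(X, f(X))
cand = rng.uniform(0, 2, size=(256, 1))
x_next = cand[np.argmin(lcb_squared_error(gp, cand, t=0.5, sigma_a2=0.1**2))]
```

Minimizing the optimistic bound trades off matching the posterior mean to the target against exploring where the epistemic variance is still large.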
Related papers
- Rethinking Approximate Gaussian Inference in Classification [25.021782278452005]
In classification tasks, softmax functions are ubiquitously used to produce predictive probabilities.
We propose a simple change in the learning objective which allows the exact computation of predictives.
We evaluate our approach combined with several approximate Gaussian inference methods on large- and small-scale datasets.
arXiv Detail & Related papers (2025-02-05T17:03:49Z)
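The summary above leaves the proposed change to the learning objective unspecified; for context, the baseline it sidesteps is Monte Carlo averaging of the softmax under a Gaussian over logits, which is only approximate. A minimal sketch of that baseline (function name and shapes illustrative):

```python
# Monte Carlo predictive E[softmax(z)] for Gaussian logits z ~ N(mu, cov);
# sampling-based and approximate -- the motivation for exact alternatives.
import numpy as np

def mc_softmax_predictive(mu, cov, n_samples=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.multivariate_normal(mu, cov, size=n_samples)  # (n_samples, K)
    z -= z.max(axis=1, keepdims=True)                     # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return p.mean(axis=0)                                 # averaged class probabilities

probs = mc_softmax_predictive(np.array([2.0, 0.5, -1.0]), 0.4 * np.eye(3))
```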
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of the corresponding invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
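A standard way to encode a known finite invariance group $G$ into a GP is to average a base kernel over the group, which preserves positive semidefiniteness and makes the kernel invariant in both arguments. A generic sketch of this construction, assuming a small finite group; not necessarily the exact kernel family analysed in the paper:

```python
# Group-averaged kernel: k_G(x, y) = mean_{g,h in G} k(g(x), h(y)).
import numpy as np

def rbf(x, y, ls=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2 * ls ** 2)))

def invariant_kernel(x, y, group, base=rbf):
    # Double averaging keeps the kernel PSD and invariant in both arguments.
    return float(np.mean([[base(g(x), h(y)) for g in group] for h in group]))

# Example: sign-flip invariance in 1-D, so k_G(x, y) = k_G(-x, y).
group = [lambda v: v, lambda v: -v]
x, y = np.array([0.7]), np.array([1.2])
assert np.isclose(invariant_kernel(x, y, group), invariant_kernel(-x, y, group))
```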
- Covariance-Adaptive Sequential Black-box Optimization for Diffusion Targeted Generation [60.41803046775034]
We show how to perform user-preferred targeted generation via diffusion models with only black-box target scores of users.
Experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores.
arXiv Detail & Related papers (2024-06-02T17:26:27Z)
- Bayesian Optimization with Conformal Prediction Sets [44.565812181545645]
Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models.
We propose conformal Bayesian optimization, which directs queries towards regions of search space where the model predictions have guaranteed validity.
In many cases we find that query coverage can be significantly improved without harming sample-efficiency.
arXiv Detail & Related papers (2022-10-22T17:01:05Z)
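For reference, the coverage guarantee behind conformal prediction comes from a short recipe: hold out a calibration set, compute residuals of the fitted model, and widen point predictions by a finite-sample-corrected residual quantile. A generic split-conformal sketch (not the paper's conformal Bayesian optimization procedure itself):

```python
# Split conformal intervals: marginal coverage >= 1 - alpha for exchangeable
# data, regardless of how good (or misspecified) the underlying model is.
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_new, alpha=0.1):
    resid = np.abs(y_cal - predict(X_cal))                # calibration scores
    n = len(resid)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(resid, level, method="higher")
    pred = predict(X_new)
    return pred - q, pred + q                             # lower, upper bounds
```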
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
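A hedged reading of the generalization, consistent with DeGroot-style decision-theoretic entropies (notation ours, not copied from the paper): the entropy of a belief $p$ over the unknown function is the Bayes risk of the best action under a chosen loss $\ell$ and action set $\mathcal{A}$, with Shannon entropy recovered for log loss:

```latex
% Decision-theoretic entropy as minimum expected loss (Bayes risk).
H_{\ell,\mathcal{A}}(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_{f \sim p}\left[\ell(f, a)\right]
```

Different choices of $\ell$ and $\mathcal{A}$ then yield the family of acquisition functions the summary above describes.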
- Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization [0.0]
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling.
The proposed method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest.
It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization.
arXiv Detail & Related papers (2022-06-07T06:26:46Z)
- Bayesian Optimization of Risk Measures [7.799648230758491]
We consider Bayesian optimization of objective functions of the form $\rho[F(x, W)]$, where $F$ is a black-box expensive-to-evaluate function.
We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency.
arXiv Detail & Related papers (2020-07-10T18:20:46Z)
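The composition $\rho[F(x, W)]$ is easiest to see with the two risk measures most commonly used in this line of work, VaR and CVaR, estimated from Monte Carlo draws of the environmental variable $W$. A toy sketch (the black box $F$ and all parameters are placeholders; this is not the paper's acquisition machinery):

```python
# Monte Carlo estimates of rho[F(x, W)] for rho = VaR or CVaR at level alpha.
import numpy as np

def value_at_risk(losses, alpha=0.9):
    return np.quantile(losses, alpha)           # alpha-quantile of the losses

def cvar(losses, alpha=0.9):
    var = value_at_risk(losses, alpha)
    return losses[losses >= var].mean()         # mean of the worst (1-alpha) tail

rng = np.random.default_rng(0)
F = lambda x, w: (x - 1.0) ** 2 + 0.5 * w       # hypothetical black box F(x, W)
losses = F(0.3, rng.standard_normal(10_000))    # fixed x, sampled W
print(value_at_risk(losses), cvar(losses))
```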
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
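The modification is essentially one line of an otherwise standard BO loop: rather than fixing the exploration weight, redraw it from a distribution at every acquisition evaluation. A minimal sketch with illustrative choices (scikit-learn GP, Gamma-distributed trade-off parameter; the paper's specific sampling distribution may differ):

```python
# Randomised GP-UCB: acquisition(x) = mu(x) + sqrt(beta) * sigma(x),
# with beta freshly sampled instead of fixed or scheduled.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def randomised_ucb(gp, X, rng, shape=2.0, scale=1.0):
    mu, s = gp.predict(X, return_std=True)
    beta = rng.gamma(shape, scale)               # redrawn on every call
    return mu + np.sqrt(beta) * s

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10, 1))
gp = GaussianProcessRegressor().fit(X, np.sin(4 * X[:, 0]))
cand = np.linspace(-1, 1, 200)[:, None]
x_next = cand[np.argmax(randomised_ucb(gp, cand, rng))]  # next query point
```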
- Uncertainty Quantification for Bayesian Optimization [12.433600693422235]
We propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions of the maximum point (or value) of the objective function.
Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
arXiv Detail & Related papers (2020-02-04T22:48:07Z) - Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior-sampling-based algorithm, distributionally robust BQO (DRBQO), for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
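One common formalization of such distributional robustness replaces the plain sample mean with its worst case over distributions in a divergence ball around the empirical one. The sketch below uses a KL ball and its standard convex dual as a generic stand-in; it illustrates the robust objective only, not the paper's phi-divergence confidence sets or posterior-sampling algorithm:

```python
# Worst-case Monte Carlo mean over a KL ball around the empirical distribution,
# via the duality  inf_{KL(Q||P)<=eps} E_Q[f] = sup_{tau>0} -tau*log E_P[e^(-f/tau)] - tau*eps.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def robust_mean(f_samples, eps):
    n = len(f_samples)
    def neg_dual(log_tau):
        tau = np.exp(log_tau)                             # enforce tau > 0
        log_mean = logsumexp(-f_samples / tau) - np.log(n)
        return tau * log_mean + tau * eps                 # = -(dual objective)
    res = minimize_scalar(neg_dual, bounds=(-10.0, 10.0), method="bounded")
    return -res.fun                                       # supremum of the dual

rng = np.random.default_rng(0)
f = rng.normal(1.0, 0.5, size=1000)                       # f(x, W_i) at a fixed x
print(np.mean(f), robust_mean(f, eps=0.05))               # robust estimate is smaller
```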