Risk-averse Heteroscedastic Bayesian Optimization
- URL: http://arxiv.org/abs/2111.03637v1
- Date: Fri, 5 Nov 2021 17:38:34 GMT
- Title: Risk-averse Heteroscedastic Bayesian Optimization
- Authors: Anastasiia Makarova, Ilnura Usmanova, Ilija Bogunovic, Andreas Krause
- Abstract summary: We propose a novel risk-averse heteroscedastic Bayesian optimization algorithm (RAHBO).
RAHBO aims to identify a solution with high return and low noise variance, while learning the noise distribution on the fly.
We provide a robust rule to report the final decision point for applications where only a single solution must be identified.
- Score: 45.12421486836736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many black-box optimization tasks arising in high-stakes applications require
risk-averse decisions. The standard Bayesian optimization (BO) paradigm,
however, optimizes the expected value only. We generalize BO to trade mean and
input-dependent variance of the objective, both of which we assume to be
unknown a priori. In particular, we propose a novel risk-averse heteroscedastic
Bayesian optimization algorithm (RAHBO) that aims to identify a solution with
high return and low noise variance, while learning the noise distribution on
the fly. To this end, we model both expectation and variance as (unknown) RKHS
functions, and propose a novel risk-aware acquisition function. We bound the
regret for our approach and provide a robust rule to report the final decision
point for applications where only a single solution must be identified. We
demonstrate the effectiveness of RAHBO on synthetic benchmark functions and
hyperparameter tuning tasks.
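To make the mean-variance trade-off in the abstract concrete, here is a minimal sketch, not the authors' RAHBO implementation: it assumes two scikit-learn GP surrogates (one fit to empirical means, one to empirical noise variances from repeated evaluations) and an illustrative UCB-style acquisition that rewards high predicted return while penalizing predicted noise variance. The names `noisy_objective`, `risk_averse_acquisition`, and the coefficients are hypothetical choices for illustration only.

```python
# Sketch of one risk-averse (mean-variance) BO step under the assumptions above.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def noisy_objective(x, n_repeats=5):
    """Toy heteroscedastic objective: noise magnitude grows with |x|."""
    noise_std = 0.05 + 0.5 * np.abs(x)
    samples = np.sin(3 * x) + noise_std * rng.standard_normal((n_repeats,) + x.shape)
    return samples.mean(axis=0), samples.var(axis=0, ddof=1)

# Initial design: repeated evaluations give empirical means and variances.
X = rng.uniform(-1, 1, size=(8, 1))
y_mean, y_var = noisy_objective(X[:, 0])

# Separate surrogates for the objective mean and the (input-dependent) noise variance.
gp_mean = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
gp_var = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
gp_mean.fit(X, y_mean)
gp_var.fit(X, y_var)

def risk_averse_acquisition(x_cand, beta=2.0, risk_coefficient=1.0):
    """Optimistic estimate of the mean minus a penalty on the predicted noise variance."""
    mu_f, sd_f = gp_mean.predict(x_cand, return_std=True)
    mu_v, sd_v = gp_var.predict(x_cand, return_std=True)
    ucb_f = mu_f + beta * sd_f                    # optimistic about the unknown mean
    lcb_v = np.maximum(mu_v - beta * sd_v, 0.0)   # optimistic (low), clipped noise variance
    return ucb_f - risk_coefficient * lcb_v

x_grid = np.linspace(-1, 1, 200).reshape(-1, 1)
x_next = x_grid[np.argmax(risk_averse_acquisition(x_grid))]
print("next query:", x_next)
```

The essential point of the sketch is that the acquisition stays optimistic about the unknown mean but penalizes the learned input-dependent noise variance, so regions with attractive average return and large noise are down-weighted.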
Related papers
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC), that can be applied to either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Risk-Controlling Model Selection via Guided Bayesian Optimization [35.53469358591976]
We find a configuration that adheres to user-specified limits on certain risks while being useful with respect to other conflicting metrics.
Our method identifies a set of optimal configurations residing in a designated region of interest.
We demonstrate the effectiveness of our approach on a range of tasks with multiple desiderata, including low error rates, equitable predictions, handling spurious correlations, managing rate and distortion in generative models, and reducing computational costs.
arXiv Detail & Related papers (2023-12-04T07:29:44Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Inducing Point Allocation for Sparse Gaussian Processes in High-Throughput Bayesian Optimisation [9.732863739456036]
We show that existing methods for allocating inducing points severely hamper optimisation performance.
By exploiting the quality-diversity decomposition of Determinantal Point Processes, we propose the first inducing point allocation strategy for use in BO.
arXiv Detail & Related papers (2023-01-24T16:43:29Z)
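For context, the quality-diversity decomposition mentioned in the entry above is the standard one from the DPP literature; how the paper builds its allocation strategy on top of it is not detailed in this summary.

```latex
% Standard quality-diversity decomposition of a DPP kernel L (general DPP
% literature, not this paper's specific construction): q_i >= 0 is the quality
% of item i and S is a similarity matrix built from unit-norm features phi_i.
L_{ij} = q_i \, S_{ij} \, q_j, \qquad S_{ij} = \phi_i^{\top}\phi_j, \quad \lVert \phi_i \rVert = 1 .
```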
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
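As a point of reference for the entry above, the decision-theoretic (DeGroot-style) generalization of entropy is the minimal expected loss over an action set; with the log loss it reduces to Shannon entropy. The exact loss family used in the paper may differ.

```latex
% Generalized (decision-theoretic) entropy of a belief p over theta, for an
% action set A and loss ell; the log loss recovers Shannon entropy.
H_{\ell,\mathcal{A}}(p) = \inf_{a \in \mathcal{A}} \; \mathbb{E}_{\theta \sim p}\big[\ell(\theta, a)\big],
\qquad
\ell(\theta, a) = -\log a(\theta) \;\Rightarrow\; H(p) = -\sum_{\theta} p(\theta)\log p(\theta).
```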
- Robust Multi-Objective Bayesian Optimization Under Input Noise [27.603887040015888]
In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that often performs worse than expected.
In this work, we propose the first multi-objective BO method that is robust to input noise.
arXiv Detail & Related papers (2022-02-15T16:33:48Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
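For readers unfamiliar with the ingredients in the entry above, these are the standard textbook forms of self-normalized importance sampling and the forward KL divergence; how the paper combines them with variational refinement is not detailed in this summary.

```latex
% Self-normalized importance sampling of E_p[f] with proposal q, and the
% forward KL divergence KL(p || q) (standard forms).
\mathbb{E}_{p}[f(x)] \approx \frac{\sum_{i=1}^{N} w_i f(x_i)}{\sum_{i=1}^{N} w_i},
\qquad w_i = \frac{p(x_i)}{q(x_i)}, \quad x_i \sim q,
\qquad
\mathrm{KL}(p \,\|\, q) = \mathbb{E}_{p}\!\left[\log \frac{p(x)}{q(x)}\right].
```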
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
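The "second player" in the entry above is the adversary in the generic distributionally robust objective below; the paper's contribution is to parameterize that inner maximization with a neural generative model.

```latex
% Generic distributionally robust optimization (DRO) objective: minimize the
% worst-case expected loss over an uncertainty set Q around the empirical
% distribution \hat{P} (generic form, not the paper's exact parameterization).
\min_{\theta} \; \sup_{Q \in \mathcal{Q}(\hat{P})} \; \mathbb{E}_{x \sim Q}\big[\ell(\theta, x)\big].
```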
- Bayesian Quantile and Expectile Optimisation [3.3878745408530833]
We propose new variational models for Bayesian quantile and expectile regression that are well-suited for heteroscedastic noise settings.
Our strategies can directly optimise for the quantile and expectile, without requiring replicated observations or assuming a parametric form for the noise.
As illustrated in the experimental section, the proposed approach clearly outperforms the state of the art in the heteroscedastic, non-Gaussian case.
arXiv Detail & Related papers (2020-01-12T20:51:21Z)
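The quantile and expectile targets in the last entry are the minimizers of the standard asymmetric losses shown below (pinball loss for quantiles, asymmetric squared loss for expectiles); the paper's variational models themselves are not reproduced here.

```latex
% Pinball (quantile) loss whose minimizer over m is the tau-quantile of y, and
% the asymmetric squared loss whose minimizer is the tau-expectile (standard forms).
\ell^{q}_{\tau}(y, m) = \max\big(\tau (y - m),\; (\tau - 1)(y - m)\big),
\qquad
\ell^{e}_{\tau}(y, m) = \big|\tau - \mathbf{1}\{y < m\}\big| \, (y - m)^{2} .
```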
This list is automatically generated from the titles and abstracts of the papers in this site.