Contextual Bayesian optimization with binary outputs
- URL: http://arxiv.org/abs/2111.03447v1
- Date: Fri, 5 Nov 2021 12:09:46 GMT
- Title: Contextual Bayesian optimization with binary outputs
- Authors: Tristan Fauvel and Matthew Chalk
- Abstract summary: In many real-world situations, the objective function can be evaluated in controlled 'contexts' or 'environments' that directly influence the observations.
Here we combine ideas from Bayesian active learning and optimization to efficiently choose the best context and optimization parameter on each iteration.
We demonstrate the performance of our algorithm and illustrate how it can be used to tackle a concrete application in visual psychophysics.
- Score: 0.5076419064097732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is an efficient method to optimize expensive
black-box functions. It has been generalized to scenarios where objective
function evaluations return stochastic binary feedback, such as success/failure
in a given test, or preference between different parameter settings. In many
real-world situations, the objective function can be evaluated in controlled
'contexts' or 'environments' that directly influence the observations. For
example, one could directly alter the 'difficulty' of the test that is used to
evaluate a system's performance. With binary feedback, the context determines
the information obtained from each observation. For example, if the test is too
easy/hard, the system will always succeed/fail, yielding uninformative binary
outputs. Here we combine ideas from Bayesian active learning and optimization
to efficiently choose the best context and optimization parameter on each
iteration. We demonstrate the performance of our algorithm and illustrate how
it can be used to tackle a concrete application in visual psychophysics:
efficiently improving patients' vision via corrective lenses, using
psychophysics measurements.
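To make the idea concrete, below is a minimal, hypothetical sketch of a contextual BO loop with binary outcomes; it is not the authors' implementation. Assumptions (all illustrative, not from the paper): an RBF kernel, a probit link, a crude GP-regression surrogate on the ±1 outcomes in place of full GP classification, a BALD-style mutual-information criterion for jointly choosing the next context and parameter, and a synthetic black-box `success_prob`.

```python
# Sketch: contextual Bayesian optimization with binary feedback.
# The context c (e.g. test difficulty) and parameter x are chosen jointly
# at each iteration to maximize the information carried by the binary outcome.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel over joint (x, c) inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

xs = np.linspace(0, 1, 25)                     # optimization parameter grid
cs = np.linspace(0, 1, 25)                     # context (difficulty) grid
grid = np.array([[x, c] for x in xs for c in cs])

def success_prob(x, c):
    # Synthetic ground truth: success is likely when the system's
    # performance at x exceeds the difficulty set by the context c.
    return norm.cdf(4 * (1 - 5 * (x - 0.7) ** 2) - 4 * c)

X, y = np.empty((0, 2)), np.empty(0)           # observed (x, c) pairs and +/-1 outcomes
K_grid = rbf(grid, grid)

for t in range(30):
    if len(y) == 0:
        mu, var = np.zeros(len(grid)), np.diag(K_grid).copy()
    else:
        # Crude surrogate: GP regression on the +/-1 labels
        # (a stand-in for proper GP classification).
        K = rbf(X, X) + 1e-2 * np.eye(len(y))
        k_star = rbf(grid, X)
        mu = k_star @ np.linalg.solve(K, y)
        var = np.diag(K_grid) - np.einsum(
            'ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
        var = np.maximum(var, 1e-9)

    # BALD-style acquisition: mutual information between the binary outcome
    # and the latent function, estimated from posterior samples.
    f_samps = mu[None, :] + np.sqrt(var)[None, :] * rng.standard_normal((64, len(grid)))
    p_samps = norm.cdf(f_samps)                # probit link
    p_mean = p_samps.mean(0)
    H = lambda p: -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    acq = H(p_mean) - H(p_samps).mean(0)       # marginal minus expected conditional entropy

    x_next, c_next = grid[np.argmax(acq)]      # joint choice of parameter and context
    outcome = 1.0 if rng.random() < success_prob(x_next, c_next) else -1.0
    X = np.vstack([X, [x_next, c_next]])
    y = np.append(y, outcome)

# Report the parameter whose average predicted success over contexts is highest.
best_x = xs[np.argmax(norm.cdf(mu).reshape(len(xs), len(cs)).mean(1))]
print(f"estimated best parameter x ~ {best_x:.2f}")
```

In this sketch the context only enters through the acquisition step: a test that is far too easy or too hard gives predictive probabilities near 0 or 1, so its mutual information is small and the loop avoids it, mirroring the motivation described in the abstract.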
Related papers
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - On the development of a Bayesian optimisation framework for complex
unknown systems [11.066706766632578]
This paper studies and compares common Bayesian optimisation algorithms empirically on a range of synthetic test functions.
It investigates the choice of acquisition function and the number of training samples, and compares exact calculation of acquisition functions with Monte Carlo-based approaches.
arXiv Detail & Related papers (2022-07-19T09:50:34Z) - Efficient Exploration in Binary and Preferential Bayesian Optimization [0.5076419064097732]
We show that it is important for BO algorithms to distinguish between different types of uncertainty.
We propose several new acquisition functions that outperform state-of-the-art BO functions.
arXiv Detail & Related papers (2021-10-18T14:44:34Z) - Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Time-varying Gaussian Process Bandit Optimization with Non-constant
Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle non-constant evaluation times.
Our bound elucidates how the pattern of the evaluation-time sequence can strongly affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z) - Composition of kernel and acquisition functions for High Dimensional
Bayesian Optimization [0.1749935196721634]
We exploit the additive structure of the objective function in constructing both the kernel and the acquisition function of the Bayesian optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for real-life application, that is the control of pumps in urban water distribution systems.
arXiv Detail & Related papers (2020-03-09T15:45:57Z) - Practical Bayesian Optimization of Objectives with Conditioning
Variables [1.0497128347190048]
We consider the more general case where a user is faced with multiple problems that each need to be optimized conditional on a state variable.
Similarity across objectives boosts optimization of each objective in two ways.
We propose a framework for conditional optimization: ConBO.
arXiv Detail & Related papers (2020-02-23T22:06:26Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.