Batch Bayesian Optimization via Particle Gradient Flows
- URL: http://arxiv.org/abs/2209.04722v1
- Date: Sat, 10 Sep 2022 18:10:15 GMT
- Title: Batch Bayesian Optimization via Particle Gradient Flows
- Authors: Enrico Crovini, Simon L. Cotter, Konstantinos Zygalakis and Andrew B.
Duncan
- Abstract summary: Bayesian optimisation methods seek the global optima of objective functions which are only available as a black-box or are expensive to evaluate.
We construct a new acquisition function based on multipoint expected improvement which is convex over the space of probability measures.
- Score: 0.5735035463793008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian Optimisation (BO) methods seek to find global optima of objective
functions which are only available as a black-box or are expensive to evaluate.
Such methods construct a surrogate model for the objective function,
quantifying the uncertainty in that surrogate through Bayesian inference.
Objective evaluations are sequentially determined by maximising an acquisition
function at each step. However, this ancillary optimisation problem can be
highly non-trivial to solve, due to the non-convexity of the acquisition
function, particularly in the case of batch Bayesian optimisation, where
multiple points are selected in every step. In this work we reformulate batch
BO as an optimisation problem over the space of probability measures. We
construct a new acquisition function based on multipoint expected improvement
which is convex over the space of probability measures. Practical schemes for
solving this `inner' optimisation problem arise naturally as gradient flows of
this objective function. We demonstrate the efficacy of this new method on
different benchmark functions and compare with state-of-the-art batch BO
methods.
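The `inner' optimisation described above can be pictured with a toy sketch: a batch of particles is evolved by gradient ascent on an expected-improvement acquisition, plus a pairwise repulsion term that keeps the batch diverse. This is only a crude stand-in for the measure-valued gradient flow derived in the paper; the posterior mean/std functions and all constants below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical surrogate posterior on [0, 1]; in practice these would come
# from a fitted Gaussian-process model.
def post_mean(x):
    return (x - 0.3) ** 2

def post_std(x):
    return 0.2 + 0.1 * np.abs(x - 0.5)

def expected_improvement(x, best):
    """Analytic EI for a Gaussian posterior (minimisation convention)."""
    mu, sigma = post_mean(x), post_std(x)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def batch_by_particle_flow(n_particles=8, steps=200, lr=0.01, repel=1e-3,
                           best=0.05, seed=0):
    """Evolve a batch of particles by finite-difference gradient ascent on
    EI plus pairwise repulsion -- a crude stand-in for the gradient flow
    over probability measures studied in the paper."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n_particles)
    h = 1e-5
    for _ in range(steps):
        grad = (expected_improvement(x + h, best)
                - expected_improvement(x - h, best)) / (2 * h)
        # Repulsion: push each particle away from its neighbours.
        diff = x[:, None] - x[None, :]
        repulsion = np.sum(diff / (diff ** 2 + 1e-2), axis=1)
        x = np.clip(x + lr * grad + lr * repel * repulsion, 0.0, 1.0)
    return np.sort(x)

batch = batch_by_particle_flow()
```

In this picture the repulsion term plays the role of the entropy/diversity component of the measure-space objective, preventing all particles from collapsing onto the single maximiser of EI.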
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature over Discrete and Mixed Spaces [6.573393706476156]
We present a novel diversified global optimisation quadrature with arbitrary kernels over discrete and mixed spaces.
Batch quadrature can efficiently solve both tasks by balancing exploitation and exploration.
We show that SOBER outperforms competitive baselines on efficient batch and scalable real-world tasks.
arXiv Detail & Related papers (2023-01-27T16:36:33Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- On the development of a Bayesian optimisation framework for complex unknown systems [11.066706766632578]
This paper studies and compares common Bayesian optimisation algorithms empirically on a range of synthetic test functions.
It investigates the choice of acquisition function and number of training samples, exact calculation of acquisition functions and Monte Carlo based approaches.
arXiv Detail & Related papers (2022-07-19T09:50:34Z)
- Bayesian Optimization for Macro Placement [48.55456716632735]
We develop a novel approach to macro placement using Bayesian optimization (BO) over sequence pairs.
BO is a machine learning technique that uses a probabilistic surrogate model and an acquisition function.
We demonstrate our algorithm on the fixed-outline macro placement problem with the half-perimeter wire length objective.
arXiv Detail & Related papers (2022-07-18T06:17:06Z)
- R-MBO: A Multi-surrogate Approach for Preference Incorporation in Multi-objective Bayesian Optimisation [0.0]
We present an a-priori multi-surrogate approach to incorporate the desirable objective function values as the preferences of a decision-maker in multi-objective BO.
The results and comparison with the existing mono-surrogate approach on benchmark and real-world optimisation problems show the potential of the proposed approach.
arXiv Detail & Related papers (2022-04-27T19:58:26Z)
- Are we Forgetting about Compositional Optimisers in Bayesian Optimisation? [66.39551991177542]
This paper studies a methodology for global optimisation.
Within this, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z)
- Bayesian Optimization of Risk Measures [7.799648230758491]
We consider Bayesian optimization of objective functions of the form $\rho[F(x, W)]$, where $F$ is a black-box expensive-to-evaluate function.
We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency.
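The objective $\rho[F(x, W)]$ can be made concrete with a Monte-Carlo sketch, here taking $\rho$ to be CVaR (one common risk measure; the paper treats a family). The function `F`, the distribution of `W`, and all constants below are hypothetical stand-ins, not the paper's benchmarks.

```python
import numpy as np

# Toy stand-in for the expensive black box F(x, W); W is the random
# environmental input.
def F(x, w):
    return (x - 0.5) ** 2 + 0.3 * w * x

def cvar_objective(x, alpha=0.9, n=20_000, seed=0):
    """Monte-Carlo estimate of CVaR_alpha[F(x, W)] with W ~ N(0, 1):
    the mean of the worst (1 - alpha) fraction of outcomes."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n)
    losses = F(x, w)
    tail = np.quantile(losses, alpha)
    return losses[losses >= tail].mean()
```

Because each evaluation of such an objective requires many draws of `W`, exploiting its nested structure (rather than treating `cvar_objective` itself as the black box) is what drives the sampling-efficiency gains the paper targets.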
arXiv Detail & Related papers (2020-07-10T18:20:46Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to subside this cold start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around high-probability regions of the function optimum and shrinks around low-probability regions.
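One way to picture such a warp: reparametrise the search space through the inverse CDF of the prior over the optimum's location, so that uniform effort in the warped coordinate concentrates where the prior puts mass. A minimal sketch, assuming a hypothetical Beta prior on a [0, 1] domain (not the paper's exact construction):

```python
import numpy as np
from scipy.stats import beta

# Hypothetical expert prior: the optimum is believed to lie near x ~ 0.7.
prior = beta(a=5.0, b=2.0)

def warp(u):
    """Map the warped coordinate u in [0, 1] back to the original space x
    via the prior's inverse CDF (percent-point function)."""
    return prior.ppf(u)

rng = np.random.default_rng(1)
u = rng.uniform(size=10_000)   # uniform proposals in the warped space
x = warp(u)                    # concentrated where the prior is high
```

Uniform samples of `u` become samples of `x` distributed according to the prior, so the optimiser spends most of its early budget near the expert's guess while still covering the whole domain.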
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
- Composition of kernel and acquisition functions for High Dimensional Bayesian Optimization [0.1749935196721634]
We exploit the additivity of the objective function, mapping it into both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for real-life application, that is the control of pumps in urban water distribution systems.
arXiv Detail & Related papers (2020-03-09T15:45:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.