SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature
over Discrete and Mixed Spaces
- URL: http://arxiv.org/abs/2301.11832v4
- Date: Wed, 5 Jul 2023 05:09:15 GMT
- Title: SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature
over Discrete and Mixed Spaces
- Authors: Masaki Adachi, Satoshi Hayakawa, Saad Hamid, Martin Jørgensen,
Harald Oberhauser, Michael A. Osborne
- Abstract summary: We present a novel algorithm, SOBER, which permits scalable and diversified batch global optimisation and quadrature with arbitrary acquisition functions and kernels over discrete and mixed spaces.
Bridging global optimisation and quadrature can efficiently solve both tasks by balancing the merits of exploitative Bayesian optimisation and explorative Bayesian quadrature.
We show that SOBER outperforms 11 competitive baselines on 12 synthetic and diverse real-world tasks.
- Score: 6.573393706476156
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Batch Bayesian optimisation and Bayesian quadrature have been shown to be
sample-efficient methods of performing optimisation and quadrature where
expensive-to-evaluate objective functions can be queried in parallel. However,
current methods do not scale to large batch sizes -- a frequent desideratum in
practice (e.g. drug discovery or simulation-based inference). We present a
novel algorithm, SOBER, which permits scalable and diversified batch global
optimisation and quadrature with arbitrary acquisition functions and kernels
over discrete and mixed spaces. The key to our approach is to reformulate batch
selection for global optimisation as a quadrature problem, which relaxes
acquisition function maximisation (non-convex) to kernel recombination
(convex). Bridging global optimisation and quadrature can efficiently solve
both tasks by balancing the merits of exploitative Bayesian optimisation and
explorative Bayesian quadrature. We show that SOBER outperforms 11 competitive
baselines on 12 synthetic and diverse real-world tasks.
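The reformulation above can be made concrete with a small sketch. The code below uses kernel herding, a simple convex kernel-quadrature rule, as a stand-in for the kernel recombination step SOBER actually performs; all names and parameters are illustrative, and in practice the candidate set would be weighted by the model's belief about the optimum's location.
```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel matrix between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def herding_batch(candidates, batch_size, lengthscale=1.0):
    """Greedy kernel herding: pick points whose empirical kernel mean
    tracks the candidates' kernel mean, yielding a diverse batch
    (a sketch of batch selection as quadrature, not SOBER's algorithm)."""
    K = rbf_kernel(candidates, candidates, lengthscale)
    mean_embedding = K.mean(axis=1)        # kernel mean of the candidate set
    chosen, running_sum = [], np.zeros(len(candidates))
    for t in range(batch_size):
        score = mean_embedding - running_sum / (t + 1)
        score[chosen] = -np.inf            # select without replacement
        i = int(np.argmax(score))
        chosen.append(i)
        running_sum += K[:, i]
    return candidates[chosen]

# Example: a diverse 10-point batch from 500 random candidates.
rng = np.random.default_rng(0)
batch = herding_batch(rng.uniform(-1, 1, (500, 2)), batch_size=10)
```
Each greedy step is a convex trade-off between matching the candidate distribution and staying away from points already chosen, which is why quadrature-style selection gives diversity for free.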
Related papers
- Batched Bayesian optimization with correlated candidate uncertainties [44.38372821900645]
We propose an acquisition strategy for discrete optimization motivated by pure exploitation, qPO (multipoint Probability of Optimality).
We apply our method to the model-guided exploration of large chemical libraries and provide empirical evidence that it performs better than or on par with state-of-the-art methods in batched Bayesian optimization.
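As a rough illustration of the pure-exploitation idea (a sketch under our own naming, not the paper's exact qPO estimator), one can estimate each candidate's probability of being the optimum from correlated joint posterior samples and batch the top-q:
```python
import numpy as np

def probability_of_optimality(mu, cov, n_draws=4096, seed=0):
    """Monte-Carlo estimate of how likely each candidate is to be the best,
    under a joint Gaussian posterior with correlated uncertainties."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mu, cov, size=n_draws)  # (n_draws, n)
    winners = draws.argmax(axis=1)
    return np.bincount(winners, minlength=len(mu)) / n_draws

# Toy posterior over 4 candidates; strong correlation between the first two.
mu = np.array([1.0, 0.9, 0.5, 0.2])
cov = np.array([[1.0, 0.9, 0.0, 0.0],
                [0.9, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])
p_opt = probability_of_optimality(mu, cov)
batch = np.argsort(-p_opt)[:2]   # the 2 candidates most likely to be optimal
```
Because the draws are joint, a candidate highly correlated with a better one rarely "wins", which is exactly the effect of modelling correlated candidate uncertainties.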
arXiv Detail & Related papers (2024-10-08T20:13:12Z) - A Quadrature Approach for General-Purpose Batch Bayesian Optimization via Probabilistic Lifting [29.476428264123644]
We introduce a versatile and modular framework for batch Bayesian optimisation via probabilistic lifting with kernel quadrature, called SOBER, which we present as a Python library based on GPyTorch/BoTorch.
Our framework offers the following unique benefits: (1) Versatility in downstream tasks under a unified approach.
(2) A gradient-free sampler, which does not require the gradient of acquisition functions, offering domain-agnostic sampling (e.g., discrete and mixed variables, non-Euclidean space).
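The SOBER library's own API is not shown in this summary, so the sketch below uses plain BoTorch/GPyTorch (which the summary says the library builds on) to illustrate what a gradient-free, domain-agnostic batch sampler looks like on a purely discrete candidate set: posterior draws are evaluated only at candidates, so no acquisition gradients are ever needed.
```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy discrete domain: 200 candidate points in {0, 1}^5 plus a few observations.
torch.manual_seed(0)
cands = torch.randint(0, 2, (200, 5)).double()
train_X = cands[:10]
train_Y = train_X.sum(dim=-1, keepdim=True).double()   # dummy objective

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Gradient-free batch selection: draw q joint posterior samples over the
# discrete candidates and take each sample's argmax (Thompson-style).
with torch.no_grad():
    samples = model.posterior(cands).rsample(torch.Size([8]))  # (8, 200, 1)
batch_idx = samples.squeeze(-1).argmax(dim=-1).unique()
batch = cands[batch_idx]
```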
arXiv Detail & Related papers (2024-04-18T14:30:46Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
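A minimal sketch of the high-confidence filtering idea (this mirrors the concept, not BALLET's exact rule or theory): discard candidates whose upper confidence bound cannot beat the best lower confidence bound, since they are confidently suboptimal.
```python
import numpy as np

def filter_region_of_interest(mu, sigma, beta=2.0):
    """Keep only candidates whose upper confidence bound clears the best
    lower confidence bound; everything else is, with high confidence,
    not the maximiser and can be removed from the search space."""
    ucb = mu + beta * sigma
    lcb = mu - beta * sigma
    return ucb >= lcb.max()

# Toy posterior over 6 candidates: the last two are confidently suboptimal.
mu = np.array([0.8, 0.7, 0.6, 0.5, -1.0, -1.2])
sigma = np.array([0.3, 0.3, 0.3, 0.3, 0.1, 0.1])
mask = filter_region_of_interest(mu, sigma)   # last two entries are False
```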
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - Batch Bayesian Optimization via Particle Gradient Flows [0.5735035463793008]
We show how to find global optima of objective functions which are only available as a black-box or are expensive to evaluate.
We construct a new acquisition function based on multipoint expected improvement which is convex over the space of probability measures.
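A minimal sketch of the particle view, with two loudly flagged simplifications: the GP posterior is replaced by closed-form toy mean/std functions, and the particles' posteriors are treated as independent, whereas the paper works with the exact multipoint quantity and Wasserstein gradient flows.
```python
import torch

# Toy differentiable posterior (stand-in for a GP), to keep the sketch
# self-contained: mean and std as closed-form functions of x.
def post_mean(x): return torch.sin(3 * x).sum(-1)
def post_std(x):  return 0.5 + 0.3 * torch.cos(x).prod(-1).abs()

def mc_qei(particles, best_f, n_mc=256):
    """Monte-Carlo multipoint expected improvement of a particle batch,
    with reparameterised samples so gradients flow to the particles."""
    eps = torch.randn(n_mc, particles.shape[0])
    f = post_mean(particles) + post_std(particles) * eps   # (n_mc, q)
    return (f.max(dim=1).values - best_f).clamp(min=0).mean()

q, dim = 5, 2
particles = torch.randn(q, dim, requires_grad=True)   # the batch as particles
opt = torch.optim.Adam([particles], lr=0.05)
for _ in range(200):                                  # gradient flow on the batch
    opt.zero_grad()
    loss = -mc_qei(particles, best_f=torch.tensor(0.5))
    loss.backward()
    opt.step()
```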
arXiv Detail & Related papers (2022-09-10T18:10:15Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
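The sketch below shows only the two ingredients the summary names, with a random forest and candidate enumeration as stand-ins for the paper's tree-ensemble kernel machinery: disagreement across trees supplies the uncertainty, and the piecewise-constant acquisition is optimised by enumerating constraint-filtered discrete candidates.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.integers(0, 4, (40, 6)).astype(float)   # discrete features
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + rng.normal(0, 0.1, 40)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

candidates = rng.integers(0, 4, (1000, 6)).astype(float)
feasible = candidates[:, 2] <= 2                      # a known input constraint

# Model uncertainty from disagreement across trees; the resulting UCB is
# piecewise constant, so we optimise it by enumerating the candidates.
per_tree = np.stack([t.predict(candidates) for t in forest.estimators_])
ucb = per_tree.mean(axis=0) + 2.0 * per_tree.std(axis=0)
ucb[~feasible] = -np.inf
next_x = candidates[np.argmax(ucb)]
```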
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
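EGP-TS itself ensembles several GPs; the sketch below shows just the random-feature trick for a single RBF GP (toy data and names ours): random Fourier features turn posterior function sampling into Bayesian linear regression, so each Thompson sample is a cheap, globally evaluable function.
```python
import numpy as np

def rff_posterior_sample(X, y, n_features=200, lengthscale=1.0,
                         noise=1e-2, seed=0):
    """Draw one function sample from an approximate RBF-GP posterior using
    random Fourier features and Bayesian linear regression in feature space."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0, 1 / lengthscale, (n_features, d))
    b = rng.uniform(0, 2 * np.pi, n_features)
    feats = lambda Z: np.sqrt(2 / n_features) * np.cos(Z @ W.T + b)
    Phi = feats(X)                                     # (n, m)
    A = Phi.T @ Phi + noise * np.eye(n_features)
    mean = np.linalg.solve(A, Phi.T @ y)
    cov = noise * np.linalg.inv(A)
    cov = (cov + cov.T) / 2                            # enforce symmetry
    w = rng.multivariate_normal(mean, cov)
    return lambda Z: feats(Z) @ w                      # the sampled function

# Thompson step: maximise one posterior sample over fresh candidates.
rng = np.random.default_rng(1)
X, y = rng.uniform(-1, 1, (20, 2)), rng.normal(size=20)
f_sample = rff_posterior_sample(X, y)
cands = rng.uniform(-1, 1, (500, 2))
next_x = cands[np.argmax(f_sample(cands))]
```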
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - LinEasyBO: Scalable Bayesian Optimization Approach for Analog Circuit
Synthesis via One-Dimensional Subspaces [11.64233949999656]
We propose a fast and robust Bayesian optimization approach via one-dimensional subspaces for analog circuit synthesis.
Our proposed algorithm can accelerate the optimization procedure by up to 9x and 38x compared to LP-EI and REMBOpBO respectively when the batch size is 15.
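A minimal sketch of the one-dimensional-subspace construction (the paper builds a Bayesian model along each line; here direct grid evaluation stands in for that inner solver, purely to show the embedding):
```python
import numpy as np

def line_step(f, x0, bounds, n_grid=64, seed=0):
    """One step in a random one-dimensional subspace: search along a random
    direction through the incumbent x0, so each inner problem is a cheap
    1-D search regardless of the full dimension."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=x0.shape)
    d /= np.linalg.norm(d)
    lo, hi = bounds
    ts = np.linspace(-1.0, 1.0, n_grid)
    X = np.clip(x0 + ts[:, None] * d, lo, hi)   # candidates on the line
    X = np.vstack([x0, X])                      # keep the incumbent as fallback
    vals = np.array([f(x) for x in X])
    return X[np.argmax(vals)]

# Example: repeated random-line steps on a 20-D quadratic.
f = lambda x: -np.sum((x - 0.3) ** 2)
x = np.zeros(20)
for step in range(50):
    x = line_step(f, x, bounds=(-1.0, 1.0), seed=step)
```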
arXiv Detail & Related papers (2021-09-01T21:25:25Z) - Are we Forgetting about Compositional Optimisers in Bayesian
Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this, a crucial performance-determining subroutine is maximising the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z) - Combinatorial Bayesian Optimization with Random Mapping Functions to
Convex Polytopes [43.19936635161588]
We present a method for Bayesian optimization in a combinatorial space which can operate well even when the space is large.
Our algorithm shows satisfactory performance compared to existing methods.
arXiv Detail & Related papers (2020-11-26T02:22:41Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time until convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces, the algorithm passes through several low-function-value regions before reaching the optimum.
One approach to alleviate this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability region.
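A minimal sketch of the warping idea, assuming a Gaussian prior over the optimum's location (the paper's construction is more general; names ours): pushing unit-cube search coordinates through the prior's inverse CDF makes equal search volumes cover more of the high-probability region.
```python
import numpy as np
from scipy.stats import norm

def warp(u, prior_mean, prior_std, lo, hi):
    """Map unit-cube search coordinates through the prior's inverse CDF:
    equal volumes of u-space then cover more of the region where the
    optimum is believed to lie, and less of the low-probability tails."""
    x = norm.ppf(u, loc=prior_mean, scale=prior_std)
    return np.clip(x, lo, hi)

# Searching uniformly in u concentrates evaluations near the prior mode 0.7.
u = np.random.default_rng(0).uniform(0.01, 0.99, (1000, 1))
x = warp(u, prior_mean=0.7, prior_std=0.1, lo=0.0, hi=1.0)
```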
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.