Inducing Point Allocation for Sparse Gaussian Processes in
High-Throughput Bayesian Optimisation
- URL: http://arxiv.org/abs/2301.10123v1
- Date: Tue, 24 Jan 2023 16:43:29 GMT
- Title: Inducing Point Allocation for Sparse Gaussian Processes in
High-Throughput Bayesian Optimisation
- Authors: Henry B. Moss, Sebastian W. Ober and Victor Picheny
- Abstract summary: We show that existing methods for allocating inducing points severely hamper optimisation performance.
By exploiting the quality-diversity decomposition of Determinantal Point Processes, we propose the first inducing point allocation strategy designed specifically for use in BO.
- Score: 9.732863739456036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Gaussian Processes are a key component of high-throughput Bayesian
Optimisation (BO) loops; however, we show that existing methods for allocating
their inducing points severely hamper optimisation performance. By exploiting
the quality-diversity decomposition of Determinantal Point Processes, we
propose the first inducing point allocation strategy designed specifically for
use in BO. Unlike existing methods which seek only to reduce global uncertainty
in the objective function, our approach provides the local high-fidelity
modelling of promising regions required for precise optimisation. More
generally, we demonstrate that our proposed framework provides a flexible way
to allocate modelling capacity in sparse models and so is suitable for a broad range
of downstream sequential decision-making tasks.
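
To make the quality-diversity idea concrete, here is a minimal sketch of greedy inducing point selection under a quality-weighted DPP kernel L(x, x') = q(x) k(x, x') q(x'): the quality term q concentrates points in promising regions while the similarity kernel k keeps them diverse. The RBF kernel, the stand-in acquisition score, and the plain greedy log-determinant search are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    # Squared-exponential similarity kernel (assumed; any PSD kernel works).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def greedy_quality_dpp(X_cand, quality, m, jitter=1e-8):
    # Greedily pick m points maximising log det of the quality-weighted
    # DPP kernel L = diag(q) K diag(q) restricted to the selected set.
    K = rbf_kernel(X_cand, X_cand)
    L = quality[:, None] * K * quality[None, :]
    selected, remaining = [], list(range(len(X_cand)))
    for _ in range(m):
        best_i, best_ld = None, -np.inf
        for i in remaining:
            S = selected + [i]
            _, ld = np.linalg.slogdet(L[np.ix_(S, S)] + jitter * np.eye(len(S)))
            if ld > best_ld:
                best_i, best_ld = i, ld
        selected.append(best_i)
        remaining.remove(best_i)
    return np.array(selected)

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))            # observed inputs as candidate sites
score = -((X - 0.7) ** 2).sum(-1)         # stand-in acquisition/promise score
Z = X[greedy_quality_dpp(X, np.exp(score), m=10)]  # allocated inducing points
```

With q constant this reduces to a purely diversity-driven (global) allocation; sharpening q toward high acquisition values trades global coverage for local fidelity around promising regions.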
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
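
As a hedged illustration of the invariant-kernel setting, the sketch below symmetrises a base kernel over a finite group, which is one standard construction; the paper's exact kernels, constrained variants, and information-gain bound are not reproduced here.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    return np.exp(-0.5 * np.sum((x - y) ** 2) / ell**2)

def invariant_kernel(x, y, group_actions, base_kernel=rbf):
    # Averaging the base kernel over the group orbit of one argument gives
    # a kernel that is invariant under the group; for this abelian group of
    # orthogonal actions the induced GP has invariant sample paths.
    return np.mean([base_kernel(x, g(y)) for g in group_actions])

sign_flip = [lambda v: v, lambda v: -v]   # toy group {x -> x, x -> -x}
x, y = np.array([0.3, -0.7]), np.array([-0.3, 0.7])
print(invariant_kernel(x, y, sign_flip) == invariant_kernel(x, -y, sign_flip))
# True: the kernel cannot distinguish y from -y
```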
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- An Adaptive Dimension Reduction Estimation Method for High-dimensional Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to perform these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, but the additional exploration step facilitates their convergence.
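
A minimal sketch of this flavour of algorithm, assuming a one-dimensional noise-free objective, a small handwritten GP, and a uniformly random extra query per round; the paper's precise exploration schedule and guarantees are not reproduced.

```python
import numpy as np

def rbf(X, Y, ell=0.2):
    return np.exp(-0.5 * (X[:, None] - Y[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Exact GP regression posterior on a grid (noise-free observations).
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)  # rbf(x, x) = 1
    return mu, var

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x) + 0.5 * x     # toy objective on [0, 1]
X = rng.uniform(0, 1, 3); y = f(X)
grid = np.linspace(0, 1, 200)
for t in range(20):
    mu, var = gp_posterior(X, y, grid)
    x_ucb = grid[np.argmax(mu + 2.0 * np.sqrt(var))]  # classical GP-UCB step
    x_rand = rng.uniform(0, 1)                        # extra exploration step
    x_new = np.array([x_ucb, x_rand])
    X, y = np.append(X, x_new), np.append(y, f(x_new))
print(X[np.argmax(y)])   # incumbent after 20 rounds
```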
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
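
As a hedged sketch of the region-of-interest idea (assumed filter form: retain candidates whose upper confidence bound reaches the best lower confidence bound, so that under valid confidence bounds the maximiser lies inside; BALLET's precise rule and theory may differ):

```python
import numpy as np

def roi_filter(mu, sigma, beta=2.0):
    # Keep x with UCB(x) >= max LCB: a high-confidence superlevel set.
    ucb, lcb = mu + beta * sigma, mu - beta * sigma
    return ucb >= lcb.max()

mu = np.array([0.1, 0.8, 0.5, -0.2]); sigma = np.array([0.3, 0.1, 0.4, 0.05])
print(roi_filter(mu, sigma))   # mask of candidates kept for the inner search
```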
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
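
One hedged reconstruction of the decision-theoretic entropy in question (our reading of the setup; the notation is assumed): for a loss ℓ and action set A, the entropy of a belief p is the minimal expected loss, and the acquisition function is its expected reduction after observing y at x.

```latex
H_{\ell,\mathcal{A}}(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_{f \sim p}\!\left[\ell(f, a)\right],
\qquad
\alpha(x) = H_{\ell,\mathcal{A}}\!\big(p(f \mid \mathcal{D})\big)
 - \mathbb{E}_{y \mid x}\!\left[ H_{\ell,\mathcal{A}}\!\big(p(f \mid \mathcal{D} \cup \{(x, y)\})\big) \right].
```

Under a log-loss choice of ℓ this recovers Shannon entropy, and hence entropy-search style acquisitions; other losses yield the flexible family mentioned above.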
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
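
For context, a minimal AIS sketch with the standard geometric bridging path p_k ∝ p_0^(1-β_k) p_T^(β_k) between a tractable base p_0 and an unnormalised target p_T; the paper instead parameterises and optimises these intermediate distributions, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
log_p0 = lambda x: -0.5 * x**2                   # standard normal base
log_pT = lambda x: -0.5 * (x - 3.0) ** 2 / 0.25  # unnormalised target

def ais_log_weight(n_steps=50, mh_steps=5, step=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x, logw = rng.normal(), 0.0                  # exact draw from p0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw += (b1 - b0) * (log_pT(x) - log_p0(x))   # importance weight update
        log_pb = lambda z: (1 - b1) * log_p0(z) + b1 * log_pT(z)
        for _ in range(mh_steps):                # MH moves targeting the bridge
            prop = x + step * rng.normal()
            if np.log(rng.uniform()) < log_pb(prop) - log_pb(x):
                x = prop
    return logw

# Averaging exp(log weights) over runs estimates the normalising-constant ratio.
print(np.log(np.mean(np.exp([ais_log_weight() for _ in range(200)]))))
```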
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Information-theoretic Inducing Point Placement for High-throughput Bayesian Optimisation [9.732863739456036]
We propose a novel inducing point design that uses a principled information-theoretic criterion to select inducing points.
By choosing inducing points to maximally reduce both global uncertainty and uncertainty in the maximum value of the objective function, we build surrogate models able to support high-precision high-throughput BO.
arXiv Detail & Related papers (2022-06-06T08:56:56Z)
- Risk-averse Heteroscedastic Bayesian Optimization [45.12421486836736]
We propose a novel risk-averse heteroscedastic Bayesian optimization algorithm (RAHBO).
RAHBO aims to identify a solution with high return and low noise variance, while learning the noise distribution on the fly.
We provide a robust rule to report the final decision point for applications where only a single solution must be identified.
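
A minimal sketch of the risk-averse reporting idea (assumed mean-variance form, trading expected return against learned noise variance; RAHBO's exact acquisition and reporting rule may differ):

```python
import numpy as np

def mean_variance_report(mu_f, var_noise, alpha=1.0):
    # mu_f: posterior mean of the objective at candidate solutions
    # var_noise: predicted heteroscedastic noise variance at those solutions
    # Report the candidate with the best risk-adjusted value.
    return int(np.argmax(mu_f - alpha * var_noise))

mu_f = np.array([1.0, 1.2, 0.9]); var_noise = np.array([0.05, 0.50, 0.01])
print(mean_variance_report(mu_f, var_noise))  # 0: slightly lower return than
# candidate 1, but far less noisy, so it is reported instead
```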
arXiv Detail & Related papers (2021-11-05T17:38:34Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in deep learning frameworks.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
- Uncertainty Quantification for Bayesian Optimization [12.433600693422235]
We propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions of the maximum point (or value) of the objective function.
Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
arXiv Detail & Related papers (2020-02-04T22:48:07Z)