Bayesian Optimization with Informative Covariance
- URL: http://arxiv.org/abs/2208.02704v2
- Date: Sat, 1 Apr 2023 17:35:58 GMT
- Title: Bayesian Optimization with Informative Covariance
- Authors: Afonso Eduardo, Michael U. Gutmann
- Abstract summary: We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
- Score: 13.113313427848828
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization is a methodology for global optimization of unknown and
expensive objectives. It combines a surrogate Bayesian regression model with an
acquisition function to decide where to evaluate the objective. Typical
regression models are given by Gaussian processes with stationary covariance
functions. However, these functions are unable to express prior input-dependent
information, including possible locations of the optimum. The ubiquity of
stationary models has led to the common practice of exploiting prior
information via informative mean functions. In this paper, we highlight that
these models can perform poorly, especially in high dimensions. We propose
novel informative covariance functions for optimization, leveraging
nonstationarity to encode preferences for certain regions of the search space
and adaptively promote local exploration during optimization. We demonstrate
that the proposed functions can increase the sample efficiency of Bayesian
optimization in high dimensions, even under weak prior information.
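To make the core idea concrete, below is a minimal sketch of one way a nonstationary, informative covariance can be built: a stationary squared-exponential base kernel modulated by an input-dependent scale that concentrates prior variance near a believed optimum location. The Gaussian-shaped scale profile and the names (informative_kernel, x0, amp, radius) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a squared-exponential base kernel and a
# Gaussian-shaped scale profile centred on a believed optimum x0; the names
# (informative_kernel, amp, radius) are illustrative, not the authors' code.
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0):
    """Stationary squared-exponential base kernel."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def scale(X, x0, amp, radius):
    """Input-dependent signal scale: larger prior variance near x0."""
    d2 = np.sum((X - x0) ** 2, axis=-1)
    return 1.0 + amp * np.exp(-0.5 * d2 / radius**2)

def informative_kernel(X1, X2, x0, lengthscale=1.0, amp=2.0, radius=1.5):
    """k(x, x') = s(x) s(x') k_SE(x, x'): a positive-semidefinite covariance
    that concentrates prior variance around the anticipated optimum x0."""
    s1 = scale(X1, x0, amp, radius)
    s2 = scale(X2, x0, amp, radius)
    return s1[:, None] * s2[None, :] * se_kernel(X1, X2, lengthscale)

# Example: 2-D search space with a prior belief that the optimum lies near 0.
X = np.random.uniform(-5, 5, size=(10, 2))
K = informative_kernel(X, X, x0=np.zeros(2))
```

Because s(x)s(x') is a separable modulation of a valid kernel, the product remains a valid covariance; the acquisition function then naturally favours the up-weighted region without imposing hard constraints.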
Related papers
- Respecting the limit: Bayesian optimization with a bound on the optimal value [3.004066195320147]
We study the scenario in which we have either exact knowledge of the minimum value or a (possibly inexact) lower bound on its value.
We present SlogGP, a new surrogate model that incorporates bound information and adapts the Expected Improvement (EI) acquisition function accordingly.
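The summary is terse; as a hedged illustration of how bound information can enter a GP surrogate, the sketch below fits the GP to log(y − f_lb), so that back-transformed predictions always respect a known lower bound f_lb. This shows the general idea of a shifted-log surrogate only; SlogGP's exact parameterisation and its EI adaptation are detailed in the paper.

```python
# A hedged sketch, not SlogGP itself: one generic way to exploit a known lower
# bound f_lb is to fit the GP to log(y - f_lb), so that back-transformed
# predictions can never fall below the bound.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_lb = 0.0                          # known (or assumed) lower bound
X = np.random.uniform(-2, 2, size=(20, 3))
y = np.sum(X**2, axis=1) + 0.5      # toy objective, bounded below by f_lb

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X, np.log(y - f_lb))         # model the log-shifted observations

mu, sd = gp.predict(X, return_std=True)
pred_median = f_lb + np.exp(mu)     # median of the implied log-normal, >= f_lb
```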
arXiv Detail & Related papers (2024-11-07T14:27:49Z) - Bayesian Optimisation of Functions on Graphs [33.97967750232631]
We propose a novel Bayesian optimisation framework that optimises over functions defined on generic, large-scale and potentially unknown graphs.
Through the learning of suitable kernels on graphs, our framework has the advantage of adapting to the behaviour of the target function.
The local modelling approach further guarantees the efficiency of our method.
arXiv Detail & Related papers (2023-06-08T15:50:35Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed ensemble-GP Thompson sampling (EGP-TS) method to the global optimum, an analysis is conducted based on the notion of Bayesian regret.
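As background for the scalability step, here is a minimal sketch of random Fourier features for the squared-exponential kernel, the standard device for drawing cheap, globally evaluable posterior function samples in Thompson sampling. The ensemble (EGP-TS) machinery is omitted; all names and settings are illustrative assumptions.

```python
# Random Fourier features (RFF): phi(x).phi(x') approximates k_SE(x, x'),
# turning each GP into a Bayesian linear model whose weight draws give
# explicit, cheaply evaluable Thompson samples.
import numpy as np

def rff_features(X, W, b):
    """phi(x) = sqrt(2/m) cos(W x + b)."""
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, m, lengthscale, noise = 3, 300, 1.0, 1e-2
W = rng.normal(0.0, 1.0 / lengthscale, size=(m, d))  # SE spectral samples
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

X = rng.uniform(-1, 1, size=(30, d))
y = np.sin(X).sum(axis=1)                            # toy observations

# Bayesian linear regression in feature space; one weight draw is one
# sampled function f(x) = phi(x).theta.
Phi = rff_features(X, W, b)
A = Phi.T @ Phi + noise * np.eye(m)
mean = np.linalg.solve(A, Phi.T @ y)
cov = noise * np.linalg.inv(A)
theta = rng.multivariate_normal(mean, cov)           # one Thompson sample
f_sample = lambda x: rff_features(np.atleast_2d(x), W, b) @ theta
```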
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms and to highlight the importance of model choice.
Addressing analytical tractability and computational feasibility in a principled fashion is necessary to ensure the efficiency and applicability of Bayesian optimisation.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Are we Forgetting about Compositional Optimisers in Bayesian
Optimisation? [66.39551991177542]
This paper presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with embedded Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around high-probability regions of the function optimum and shrinks around low-probability regions.
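One simple way to realise such a warp, sketched below under the assumption of an independent Gaussian prior on the optimum's location, is to run the optimiser in a latent unit cube and map latent points through the prior's quantile function. This illustrates the idea only; it is not necessarily the paper's construction.

```python
# A hedged sketch of prior-based space warping: equal latent volumes cover
# more of the original space where the prior on the optimum is high, so
# uniform latent exploration concentrates near the believed optimum.
import numpy as np
from scipy.stats import norm

prior_mean = np.array([0.5, -1.0])   # believed optimum location (assumption)
prior_std = np.array([0.3, 0.5])     # confidence in that belief, per dimension

def warp(u):
    """Map latent u in (0, 1)^d through the Gaussian prior's quantile
    function; dense near prior_mean, stretched far away from it."""
    return norm.ppf(u, loc=prior_mean, scale=prior_std)

# Uniform latent samples land preferentially near the believed optimum.
u = np.random.uniform(0.01, 0.99, size=(1000, 2))
x = warp(u)
```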
arXiv Detail & Related papers (2020-03-27T06:18:49Z) - Composition of kernel and acquisition functions for High Dimensional
Bayesian Optimization [0.1749935196721634]
We exploit the additivity of the objective function when constructing both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning and updating of the probabilistic surrogate model more efficient.
Results are presented for a real-life application, namely the control of pumps in urban water distribution systems.
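For readers unfamiliar with additive decompositions, the sketch below shows the basic construction: if the objective decomposes over disjoint variable groups, the GP kernel decomposes into a sum of low-dimensional kernels, and the acquisition function can be optimised group by group. The group structure here is an illustrative assumption.

```python
# If f(x) = f1(x_G1) + f2(x_G2), the kernel decomposes as a sum of
# low-dimensional kernels over the groups: k(x, x') = sum_i k_i(x_Gi, x'_Gi).
import numpy as np

def se(X1, X2, lengthscale=1.0):
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

groups = [[0, 1], [2, 3, 4]]     # disjoint variable groups of a 5-D problem

def additive_kernel(X1, X2):
    """Each additive term sees only its own low-dimensional group."""
    return sum(se(X1[:, g], X2[:, g]) for g in groups)

X = np.random.uniform(0, 1, size=(8, 5))
K = additive_kernel(X, X)
```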
arXiv Detail & Related papers (2020-03-09T15:45:57Z) - Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based
Surrogate Model Via Uncertainty Quantification [11.486221800371919]
We propose a novel global optimization framework using an adaptive Radial Basis Function (RBF)-based surrogate model via uncertainty quantification.
It first employs an RBF-based Bayesian surrogate model to approximate the true function, where the parameters of the RBFs can be adaptively estimated and updated each time a new point is explored.
It then utilizes a model-guided selection criterion to identify a new point from a candidate set for function evaluation.
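A hedged sketch of the loop described above: refit an RBF surrogate whenever a point is added, then score a candidate set with a simple model-guided criterion. The exploration bonus used here (distance to the nearest evaluated point) is a crude stand-in for the paper's uncertainty quantification.

```python
# Sketch of the surrogate-refit / candidate-selection loop (minimisation).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

def objective(X):                     # expensive black box (toy stand-in)
    return np.sum((X - 0.3) ** 2, axis=1)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(5, 2))    # initial design
y = objective(X)

for _ in range(10):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")  # refit
    cand = rng.uniform(0, 1, size=(200, 2))                        # candidates
    pred = surrogate(cand)
    bonus = cdist(cand, X).min(axis=1)    # crude exploration proxy
    score = pred - 0.5 * bonus            # lower is better
    x_new = cand[np.argmin(score)]
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new[None, :]))
```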
arXiv Detail & Related papers (2020-01-19T16:15:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.