Relaxed Gaussian process interpolation: a goal-oriented approach to
Bayesian optimization
- URL: http://arxiv.org/abs/2206.03034v1
- Date: Tue, 7 Jun 2022 06:26:46 GMT
- Authors: Sébastien Petit (GdR MASCOT-NUM), Julien Bect (GdR MASCOT-NUM, L2S),
Emmanuel Vazquez (GdR MASCOT-NUM, L2S)
- Abstract summary: This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling.
The method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest.
It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents a new procedure for obtaining predictive distributions in
the context of Gaussian process (GP) modeling, with a relaxation of the
interpolation constraints outside some ranges of interest: the mean of the
predictive distribution no longer necessarily interpolates the observed values
when they lie outside the ranges of interest, but is simply constrained to
remain outside them. This method, called relaxed Gaussian process (reGP)
interpolation,
provides better predictive distributions in ranges of interest, especially in
cases where a stationarity assumption for the GP model is not appropriate. It
can be viewed as a goal-oriented method and becomes particularly interesting in
Bayesian optimization, for example, for the minimization of an objective
function, where good predictive distributions for low function values are
important. When the expected improvement criterion and reGP are used for
sequentially choosing evaluation points, the convergence of the resulting
optimization algorithm is theoretically guaranteed (provided that the function
to be optimized lies in the reproducing kernel Hilbert space attached to the
known covariance of the underlying Gaussian process). Experiments indicate that
using reGP instead of stationary GP models in Bayesian optimization is
beneficial.
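To make the idea concrete, below is a minimal, hypothetical sketch of a reGP-style step (not the authors' implementation): observations above a threshold are replaced by pseudo-values that are only constrained to stay above it, chosen by maximizing the GP marginal likelihood, and the resulting posterior then drives a standard expected-improvement step. The kernel, threshold rule, jitter, and test function are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel on 1-d inputs (illustrative choice)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x, z, xt, noise=1e-6):
    """Standard GP posterior mean/std at test points xt, given data (x, z)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    Ks = rbf(xt, x)
    Kinv = np.linalg.inv(K)
    mu = Ks @ (Kinv @ z)
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def neg_log_lik(z, K):
    """Negative log marginal likelihood of values z under prior covariance K."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return 0.5 * z @ alpha + np.log(np.diag(L)).sum()

def regp_pseudo_obs(x, y, t, noise=1e-6):
    """Relax observations above threshold t: replace them by pseudo-values
    that are only constrained to stay >= t, chosen to maximize the GP
    marginal likelihood (a crude stand-in for the reGP construction)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    hi = y > t  # observations outside the range of interest
    def obj(zh):
        z = y.copy()
        z[hi] = zh
        return neg_log_lik(z, K)
    zh = minimize(obj, y[hi], bounds=[(t, None)] * int(hi.sum())).x
    z = y.copy()
    z[hi] = zh
    return z

def expected_improvement(mu, sd, best):
    """EI for minimization: E[max(best - f(x), 0)] under the posterior."""
    u = (best - mu) / sd
    return (best - mu) * norm.cdf(u) + sd * norm.pdf(u)

# One Bayesian-optimization step with the relaxed pseudo-observations.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10)
y = np.sin(12.0 * x) + 25.0 * np.maximum(x - 0.8, 0.0) ** 2  # steep tail breaks stationarity
z = regp_pseudo_obs(x, y, t=np.quantile(y, 0.4))  # relax the highest values
xt = np.linspace(0.0, 1.0, 401)
mu, sd = gp_posterior(x, z, xt)
x_next = xt[np.argmax(expected_improvement(mu, sd, best=y.min()))]
```

Under these assumptions, the relaxed posterior can track the low (interesting) values closely even when the large observed values would otherwise distort a stationary fit.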
Related papers
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
The authors propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB algorithm, while the additional exploration step facilitates their convergence (a generic GP-UCB acquisition step is sketched after this list).
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To make function sampling scalable, a random feature-based kernel approximation is leveraged per GP model (a generic sketch of this approximation appears after this list).
Convergence of the proposed EGP-TS to the global optimum is established through an analysis based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Preferential Bayesian optimisation with Skew Gaussian Processes [0.225596179391365]
We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP).
We derive an efficient method to compute the exact SkewGP posterior and use it as a surrogate model for PBO employing standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
arXiv Detail & Related papers (2020-08-15T08:23:17Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces solution times by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Uncertainty Quantification for Bayesian Optimization [12.433600693422235]
We propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions of the maximum point (or value) of the objective function.
Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria.
arXiv Detail & Related papers (2020-02-04T22:48:07Z)
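As referenced in the random-exploration entry above, here is a generic, textbook-style GP-UCB acquisition step (an illustrative sketch, not that paper's algorithm); mu and sd stand for any GP posterior over a candidate grid, and the beta_t schedule is the common one from the GP-UCB literature.

```python
import numpy as np

def gp_ucb_choice(mu, sd, t, delta=0.1):
    """Pick the candidate with the largest upper confidence bound
    (maximization) at round t; delta is a confidence parameter and the
    beta_t schedule is the standard one from the GP-UCB literature."""
    beta_t = 2.0 * np.log(len(mu) * t**2 * np.pi**2 / (6.0 * delta))
    return int(np.argmax(mu + np.sqrt(beta_t) * sd))

mu = np.array([0.10, 0.40, 0.30])  # posterior means on three candidates
sd = np.array([0.50, 0.10, 0.60])  # posterior standard deviations
print(gp_ucb_choice(mu, sd, t=1))  # index of the most promising candidate
```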
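And as referenced in the EGP-TS entry, a minimal sketch of random Fourier features, the usual random feature-based approximation of an RBF kernel (illustrative parameters; not that paper's code):

```python
import numpy as np

def random_fourier_features(X, n_features=256, lengthscale=0.5, seed=0):
    """Feature map whose inner products approximate the RBF kernel
    exp(-||x - x'||^2 / (2 * lengthscale^2)) (Rahimi-Recht construction)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 2))
Phi = random_fourier_features(X)
K_approx = Phi @ Phi.T  # approximates the exact RBF Gram matrix of X
```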
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.