Convergence Guarantees for Gaussian Process Means With Misspecified
Likelihoods and Smoothness
- URL: http://arxiv.org/abs/2001.10818v3
- Date: Tue, 18 May 2021 12:33:03 GMT
- Title: Convergence Guarantees for Gaussian Process Means With Misspecified
Likelihoods and Smoothness
- Authors: George Wynne, François-Xavier Briol and Mark Girolami
- Abstract summary: We study the properties of Gaussian process means when the smoothness of the model and the likelihood function are misspecified.
The answer to this problem is particularly useful since it can inform our choice of model and experimental design.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian processes are ubiquitous in machine learning, statistics, and
applied mathematics. They provide a flexible modelling framework for
approximating functions, whilst simultaneously quantifying uncertainty.
However, this is only true when the model is well-specified, which is often not
the case in practice. In this paper, we study the properties of Gaussian
process means when the smoothness of the model and the likelihood function are
misspecified. In this setting, an important theoretical question of practical
relevance is how accurate the Gaussian process approximations will be given the
difficulty of the problem, our model and the extent of the misspecification.
The answer to this problem is particularly useful since it can inform our
choice of model and experimental design. In particular, we describe how the
experimental design and choice of kernel and kernel hyperparameters can be
adapted to alleviate model misspecification.
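The setting can be made concrete with a minimal sketch of our own (illustrative only, not the paper's analysis): a Gaussian process posterior mean is computed with a kernel whose smoothness may or may not match the target. The kernels, lengthscales, target function, and data below are all assumptions for illustration.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's method): GP posterior mean
# under a smooth kernel vs. a rough kernel, on a target with a kink.

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel: infinitely smooth sample paths.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def matern12(a, b, ell=0.2):
    # Matern-1/2 (exponential) kernel: continuous but non-differentiable paths.
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-d / ell)

def gp_mean(kernel, X, y, Xs, noise_var=1e-2):
    # Posterior mean m(x*) = k(x*, X) (K + noise_var * I)^{-1} y.
    K = kernel(X, X) + noise_var * np.eye(len(X))
    return kernel(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 40))
f = lambda x: np.abs(x - 0.5)              # non-smooth target: kink at x = 0.5
y = f(X) + 0.01 * rng.normal(size=X.size)  # noisy observations
Xs = np.linspace(0.0, 1.0, 200)

for name, k in [("RBF", rbf), ("Matern-1/2", matern12)]:
    sup_err = np.max(np.abs(gp_mean(k, X, y, Xs) - f(Xs)))
    print(f"{name}: sup-error {sup_err:.3f}")
```

Varying the kernel family and its lengthscale here is exactly the kind of smoothness choice whose effect on approximation accuracy the paper analyses.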
Related papers
- Variational Bayesian surrogate modelling with application to robust design optimisation
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate intrinsic and robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
arXiv Detail & Related papers (2024-04-23T09:22:35Z)
- Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds
We show that intrinsic Gaussian processes can achieve better performance in practice.
Our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency.
arXiv Detail & Related papers (2023-09-19T20:30:58Z)
- User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the resulting samples match data statistics even when drawn from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Learning non-stationary and discontinuous functions using clustering, classification and Gaussian process modelling
We propose a three-stage approach for the approximation of non-smooth functions.
The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates.
The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
arXiv Detail & Related papers (2022-11-30T11:11:56Z)
- Uncertainty Disentanglement with Non-stationary Heteroscedastic Gaussian Processes for Active Learning
We propose a Non-stationary Heteroscedastic Gaussian process model which can be learned with gradient-based techniques.
We demonstrate the interpretability of the proposed model by separating the overall uncertainty into aleatoric (irreducible) and epistemic (model) uncertainty.
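The aleatoric/epistemic split can be illustrated with a plain stationary, homoscedastic GP sketch of our own (the paper's non-stationary heteroscedastic model is more general; kernel, noise level, and data below are assumptions):

```python
import numpy as np

# Illustrative decomposition (our sketch, not the paper's model):
# total predictive variance = epistemic (posterior) variance + aleatoric noise.

def rbf(a, b, ell=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_variances(X, Xs, noise_var=0.05):
    K = rbf(X, X) + noise_var * np.eye(len(X))
    Ks = rbf(Xs, X)
    # Epistemic: posterior variance k(x,x) - k(x,X)(K + noise*I)^{-1} k(X,x).
    epistemic = rbf(Xs, Xs).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T)
    )
    # Aleatoric: irreducible observation noise (constant here; the paper's
    # heteroscedastic model would let it vary with the input).
    aleatoric = np.full(len(Xs), noise_var)
    return epistemic, aleatoric

X = np.linspace(0.0, 1.0, 10)
Xs = np.array([0.5, 2.0])        # one point inside, one far from the data
ep, al = gp_variances(X, Xs)
# Epistemic uncertainty grows away from the data; aleatoric stays constant.
```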
arXiv Detail & Related papers (2022-10-20T02:18:19Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
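One way to read the minimum-separation condition is the following naive sketch, written under our own assumptions (the paper's cover-tree construction is more sophisticated and efficient): greedily keep only points at least `delta` away from every point already kept, which keeps the inducing-point kernel matrix well-conditioned.

```python
import numpy as np

# Hypothetical sketch: greedy selection of inducing points with pairwise
# separation >= delta. This O(n * m) loop only illustrates the separation
# condition; it is not the paper's cover-tree algorithm.
def min_sep_inducing(X, delta):
    chosen = []
    for x in X:
        if all(np.linalg.norm(x - z) >= delta for z in chosen):
            chosen.append(x)
    return np.array(chosen)

X = np.random.default_rng(1).uniform(0.0, 1.0, size=(200, 2))
Z = min_sep_inducing(X, delta=0.2)
# Every pair of inducing points in Z is at least 0.2 apart.
```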
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- A Kernel-Based Approach for Modelling Gaussian Processes with Functional Information
We use a Gaussian process model to unify the typical finite case with the case of uncountable information.
We discuss this construction in statistical models, including numerical considerations and a proof of concept.
arXiv Detail & Related papers (2022-01-26T15:58:08Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
- Maximum likelihood estimation and uncertainty quantification for Gaussian process approximation of deterministic functions
This article provides one of the first theoretical analyses in the context of Gaussian process regression with a noiseless dataset.
We show that the maximum likelihood estimation of the scale parameter alone provides significant adaptation against misspecification of the Gaussian process model.
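The scale-only maximum likelihood estimate has a well-known closed form, which we can sketch with a toy example of our own (the kernel and data below are assumptions, not the paper's experiments): for a GP prior sigma^2 * k0 and noiseless observations y at X, the MLE is sigma2_hat = y^T K0^{-1} y / n.

```python
import numpy as np

# Sketch (our toy example, not the paper's setup): closed-form MLE of the
# scale parameter sigma^2 in a GP prior sigma^2 * k0, given noiseless data.
def matern12(a, b, ell=0.2):
    # Exponential (Matern-1/2) kernel; numerically well-conditioned.
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-d / ell)

X = np.linspace(0.0, 1.0, 25)
y = np.sin(6.0 * X)                            # deterministic target, no noise
K0 = matern12(X, X) + 1e-10 * np.eye(len(X))   # tiny jitter for stability
sigma2_hat = y @ np.linalg.solve(K0, y) / len(X)
# sigma2_hat rescales the prior amplitude to match the data, which is the
# adaptation mechanism the paper analyses.
```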
arXiv Detail & Related papers (2020-01-29T17:20:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.