Mixtures of Gaussian Processes for regression under multiple prior distributions
- URL: http://arxiv.org/abs/2104.09185v1
- Date: Mon, 19 Apr 2021 10:19:14 GMT
- Title: Mixtures of Gaussian Processes for regression under multiple prior distributions
- Authors: Sarem Seitz
- Abstract summary: We extend the idea of Mixture models for Gaussian Process regression in order to work with multiple prior beliefs at once.
We also consider using our approach to account for prior misspecification in functional regression problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When constructing a Bayesian Machine Learning model, we may be faced with
multiple different prior distributions and must therefore incorporate them into
the model in a sensible manner. While this situation is reasonably well explored
in classical Bayesian statistics, it appears useful to develop a corresponding
method for complex Machine Learning problems. Given their underlying Bayesian
framework and their widespread popularity, Gaussian Processes are a good
candidate to tackle this task. We therefore extend the idea of mixture models
for Gaussian Process regression to work with multiple prior beliefs at once;
both an analytical regression formula and a sparse variational approach are
considered. In addition, we consider using our approach to account for prior
misspecification in functional regression problems.
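To make the combination concrete, here is a minimal numpy sketch of one natural reading of the idea (not necessarily the paper's exact formulation): fit one GP per prior belief, weight each by its log marginal likelihood (i.e. Bayesian model averaging with a uniform prior over beliefs), and report the weighted mixture of predictive means. Kernels, hyperparameters, and data below are all hypothetical.
```python
import numpy as np

def rbf(X1, X2, ls, var):
    # Squared-exponential kernel between two sets of points.
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d / ls**2)

def gp_fit(X, y, ls, var, noise):
    K = rbf(X, X, ls, var) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # Log marginal likelihood log p(y | X, hyperparameters).
    lml = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
           - 0.5 * len(X) * np.log(2 * np.pi))
    return L, alpha, lml

def gp_predict_mean(Xs, X, alpha, ls, var):
    return rbf(Xs, X, ls, var) @ alpha

# Two competing prior beliefs about the function: short vs. long length-scale.
priors = [dict(ls=0.2, var=1.0), dict(ls=2.0, var=1.0)]
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
Xs = np.linspace(-3, 3, 100)[:, None]

fits = [gp_fit(X, y, p["ls"], p["var"], 0.01) for p in priors]
lmls = np.array([f[2] for f in fits])
w = np.exp(lmls - lmls.max())
w /= w.sum()                      # posterior weight of each prior belief
mus = [gp_predict_mean(Xs, X, a, p["ls"], p["var"])
       for (_, a, _), p in zip(fits, priors)]
mixture_mean = sum(wi * mi for wi, mi in zip(w, mus))
```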
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
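The paper's Stratonovich-like augmentation is beyond a short sketch. As context only, a naive circular-regression baseline embeds angles as (sin, cos) pairs, regresses each coordinate with an off-the-shelf GP, and recovers an angle with atan2; this is a common baseline, not the paper's method, and the data here are hypothetical.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical data: angles theta (radians) observed at inputs X.
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, (40, 1))
theta = np.mod(1.5 * X[:, 0] + 0.2 * rng.standard_normal(40), 2 * np.pi)

# Embed the circle in the plane, regress each coordinate, recover the angle.
gp_sin = GaussianProcessRegressor().fit(X, np.sin(theta))
gp_cos = GaussianProcessRegressor().fit(X, np.cos(theta))

Xs = np.linspace(0, 5, 100)[:, None]
theta_hat = np.arctan2(gp_sin.predict(Xs), gp_cos.predict(Xs)) % (2 * np.pi)
```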
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
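A generic Monte Carlo fusion sketch, assuming each model's predictive at a test point is Gaussian; the moments and weights below are hypothetical, and the paper's actual integration scheme may differ.
```python
import numpy as np

def fuse_gp_predictions(mus, sigmas, weights, n_samples=10_000, seed=0):
    """Monte Carlo fusion of several GP predictive densities at one test
    point: sample a model index by weight, then sample from that model's
    Gaussian predictive. Returns the fused mean and standard deviation."""
    rng = np.random.default_rng(seed)
    mus, sigmas, weights = map(np.asarray, (mus, sigmas, weights))
    idx = rng.choice(len(mus), size=n_samples, p=weights / weights.sum())
    draws = rng.normal(mus[idx], sigmas[idx])
    return draws.mean(), draws.std()

# Hypothetical predictives from three models at the same test input.
mean, std = fuse_gp_predictions(mus=[0.9, 1.2, 1.0],
                                sigmas=[0.2, 0.5, 0.3],
                                weights=[0.5, 0.2, 0.3])
```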
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Locking and Quacking: Stacking Bayesian model predictions by log-pooling and superposition [0.5735035463793007]
We present two novel tools for combining predictions from different models.
These are generalisations of model stacking, but combine posterior densities by log-linear pooling and quantum superposition.
To optimise model weights while avoiding the burden of normalising constants, we investigate the Hyvarinen score of the combined posterior predictions.
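For Gaussian predictive densities, log-linear pooling has a closed form, since a weighted product of powered Gaussians is again Gaussian. A small sketch of that special case (the quantum-superposition variant and the Hyvarinen-score weight optimisation are not shown):
```python
import numpy as np

def log_linear_pool_gaussians(mus, sigmas, weights):
    """Log-linear pooling p(y) ∝ prod_i p_i(y)^{w_i} of Gaussian densities.
    The pool is Gaussian with precision sum_i w_i / sigma_i^2 and a
    precision-weighted mean."""
    mus = np.asarray(mus)
    sig2 = np.asarray(sigmas) ** 2
    w = np.asarray(weights)
    prec = (w / sig2).sum()
    mean = (w * mus / sig2).sum() / prec
    return mean, np.sqrt(1.0 / prec)

# Hypothetical predictives from two models, pooled with weights 0.3 / 0.7.
mean, sd = log_linear_pool_gaussians([0.0, 1.0], [1.0, 0.5], [0.3, 0.7])
```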
arXiv Detail & Related papers (2023-05-12T09:26:26Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
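The paper's construction is more refined; as a point of contrast, a common recalibration baseline simply rescales the predictive standard deviation so that a held-out interval reaches its nominal coverage. The function name, grid, and validation arrays below are hypothetical.
```python
import numpy as np
from scipy.stats import norm

def calibrate_scale(y_val, mu_val, sigma_val, target=0.9):
    """Pick one multiplier for the predictive standard deviation so that
    the central `target` interval attains its nominal empirical coverage
    on a validation set (a simple recalibration baseline)."""
    z = norm.ppf(0.5 + target / 2)        # e.g. ~1.645 for a 90% interval
    grid = np.linspace(0.5, 3.0, 251)
    coverage = np.array([(np.abs(y_val - mu_val) <= c * z * sigma_val).mean()
                         for c in grid])
    return grid[np.argmin(np.abs(coverage - target))]

# Usage: scale = calibrate_scale(y_val, mu_val, sigma_val)
# then report intervals mu ± scale * z * sigma on test data.
```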
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- Introduction To Gaussian Process Regression In Bayesian Inverse Problems, With New Results On Experimental Design For Weighted Error Measures [0.0]
This work serves as an introduction to Gaussian process regression, in particular in the context of building surrogate models for inverse problems.
We show that the error between the true and approximate posterior distributions can be bounded by the error between the true and approximate likelihoods, measured in the $L^2$-norm weighted by the true posterior.
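In symbols, the bound has the schematic shape $d(\pi, \pi^{N}) \le C \, \lVert L - L^{N} \rVert_{L^{2}_{\pi}}$, where $\pi$ and $\pi^{N}$ denote the true and approximate posteriors, $L$ and $L^{N}$ the true and approximate likelihoods, and $C$ a constant; the precise distance and assumptions are as in the paper.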
arXiv Detail & Related papers (2023-02-09T09:25:39Z)
- Learning non-stationary and discontinuous functions using clustering, classification and Gaussian process modelling [0.0]
We propose a three-stage approach for the approximation of non-smooth functions.
The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates.
The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
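One way to read the three stages, as a hedged scikit-learn sketch with a hypothetical regime count and toy data: cluster on (input, output) pairs to discover regimes, train a classifier to route new inputs to a regime, and fit one local GP surrogate per regime.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_local_surrogates(X, y, n_regimes=2, seed=0):
    # Stage 1: cluster on (input, output) so the jump separates regimes.
    labels = KMeans(n_clusters=n_regimes, random_state=seed,
                    n_init=10).fit_predict(np.column_stack([X, y]))
    # Stage 2: classifier that predicts the regime from the input alone.
    clf = SVC().fit(X, labels)
    # Stage 3: one local GP surrogate per regime.
    gps = {k: GaussianProcessRegressor().fit(X[labels == k], y[labels == k])
           for k in range(n_regimes)}
    return clf, gps

def predict_local(Xs, clf, gps):
    regime = clf.predict(Xs)
    return np.array([gps[k].predict(x[None, :])[0]
                     for k, x in zip(regime, Xs)])

# Discontinuous toy function: two regimes with a jump at x = 0.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (80, 1))
y = np.where(X[:, 0] < 0, np.sin(4 * X[:, 0]), 2 + X[:, 0])
clf, gps = fit_local_surrogates(X, y)
y_hat = predict_local(np.linspace(-1, 1, 50)[:, None], clf, gps)
```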
arXiv Detail & Related papers (2022-11-30T11:11:56Z)
- RMFGP: Rotated Multi-fidelity Gaussian process with Dimension Reduction for High-dimensional Uncertainty Quantification [12.826754199680474]
Multi-fidelity modelling enables accurate inference even when only a small set of accurate data is available.
By combining the realizations of the high-fidelity model with one or more low-fidelity models, the multi-fidelity method can make accurate predictions of quantities of interest.
This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme.
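The rotation and dimension-reduction machinery is specific to the paper. A minimal two-fidelity baseline in the same spirit fits a GP to plentiful low-fidelity data and a second GP to the discrepancy at the scarce high-fidelity points; the functions and sample sizes below are hypothetical.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical fidelities: cheap model f_lo, expensive model f_hi.
f_lo = lambda x: np.sin(8 * x)
f_hi = lambda x: np.sin(8 * x) + 0.3 * x    # low fidelity plus a smooth bias

rng = np.random.default_rng(3)
X_lo = rng.uniform(0, 1, (50, 1)); y_lo = f_lo(X_lo[:, 0])
X_hi = rng.uniform(0, 1, (8, 1));  y_hi = f_hi(X_hi[:, 0])

gp_lo = GaussianProcessRegressor().fit(X_lo, y_lo)
# Model only the discrepancy at the scarce high-fidelity points.
delta = y_hi - gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor().fit(X_hi, delta)

Xs = np.linspace(0, 1, 100)[:, None]
y_pred = gp_lo.predict(Xs) + gp_delta.predict(Xs)
```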
arXiv Detail & Related papers (2022-04-11T01:20:35Z)
- Machine Learning for Multi-Output Regression: When should a holistic multivariate approach be preferred over separate univariate ones? [62.997667081978825]
Tree-based ensembles such as the Random Forest are modern classics among statistical learning methods.
We compare these methods in extensive simulations to help in answering the primary question when to use multivariate ensemble techniques.
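The contrast being simulated is easy to reproduce, since scikit-learn's RandomForestRegressor accepts a 2-D target natively; the data-generating process below is hypothetical.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 5))
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 0] - X[:, 2]])  # correlated targets
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Holistic: one forest fit on all targets jointly.
joint = RandomForestRegressor(random_state=0).fit(X_tr, Y_tr)

# Separate: one independent forest per target.
sep = [RandomForestRegressor(random_state=0).fit(X_tr, Y_tr[:, j])
       for j in range(Y.shape[1])]

mse_joint = ((joint.predict(X_te) - Y_te) ** 2).mean()
mse_sep = ((np.column_stack([m.predict(X_te) for m in sep]) - Y_te) ** 2).mean()
```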
arXiv Detail & Related papers (2022-01-14T08:44:25Z)
- Robust priors for regularized regression [12.945710636153537]
Penalized regression approaches like ridge regression shrink weights toward zero, but zero is usually not a sensible prior.
Inspired by simple and robust decisions humans use, we constructed non-zero priors for penalized regression models.
Models with robust priors had excellent worst-case performance.
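A minimal sketch of the core idea, assuming shrinkage toward a hand-chosen non-zero vector w0 (here all-ones, in the spirit of equal-weights heuristics); this is the textbook closed form for ridge with a non-zero center, not necessarily the paper's exact estimator.
```python
import numpy as np

def ridge_with_prior(X, y, w0, lam=1.0):
    """Ridge regression shrinking toward a non-zero prior vector w0:
    minimize ||y - Xw||^2 + lam * ||w - w0||^2, with closed form
    w = (X'X + lam*I)^{-1} (X'y + lam*w0)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

# Hypothetical data whose true weights sit near the equal-weights prior.
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 4))
w_true = np.array([1.1, 0.9, 1.0, 1.2])
y = X @ w_true + 0.1 * rng.standard_normal(100)
w = ridge_with_prior(X, y, w0=np.ones(4), lam=5.0)
```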
arXiv Detail & Related papers (2020-10-06T10:43:14Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
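The decoupling in question is often written via Matheron's rule: a prior sample path (approximated here with random Fourier features) plus a data-dependent correction. A hedged numpy sketch with hypothetical kernel settings and toy data:
```python
import numpy as np

def rbf(A, B, ls=1.0):
    return np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ls**2)

def decoupled_posterior_sample(X, y, Xs, n_feat=500, ls=1.0, noise=1e-2, seed=0):
    """One pathwise posterior sample via Matheron's rule:
    f_post(x) = f_prior(x) + k(x,X) K^{-1} (y - f_prior(X) - eps)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random Fourier features approximating the RBF kernel.
    W = rng.standard_normal((n_feat, d)) / ls
    b = rng.uniform(0, 2 * np.pi, n_feat)
    phi = lambda Z: np.sqrt(2.0 / n_feat) * np.cos(Z @ W.T + b)
    wts = rng.standard_normal(n_feat)          # prior weights ~ N(0, I)
    f_prior_X, f_prior_s = phi(X) @ wts, phi(Xs) @ wts
    eps = np.sqrt(noise) * rng.standard_normal(len(X))
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    update = rbf(Xs, X, ls) @ np.linalg.solve(K, y - f_prior_X - eps)
    return f_prior_s + update                  # one posterior sample path

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, (25, 1)); y = np.sin(2 * X[:, 0])
sample = decoupled_posterior_sample(X, y, np.linspace(-2, 2, 100)[:, None])
```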
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
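The Klivans-Meka adaptation itself is involved; for orientation, the generic multiplicative-weights template that such algorithms build on is only a few lines (the losses and learning rate below are hypothetical).
```python
import numpy as np

def multiplicative_weights(losses, eta=0.5):
    """Generic multiplicative-weights update: each round, every expert's
    weight is scaled by exp(-eta * loss) and renormalized.
    `losses` has shape (rounds, n_experts)."""
    n = losses.shape[1]
    w = np.full(n, 1.0 / n)
    for loss in losses:
        w *= np.exp(-eta * loss)
        w /= w.sum()
    return w

rng = np.random.default_rng(7)
final_w = multiplicative_weights(rng.uniform(0, 1, (100, 4)))
```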
arXiv Detail & Related papers (2020-02-20T10:50:58Z)