Transport Gaussian Processes for Regression
- URL: http://arxiv.org/abs/2001.11473v1
- Date: Thu, 30 Jan 2020 17:44:21 GMT
- Title: Transport Gaussian Processes for Regression
- Authors: Gonzalo Rios
- Abstract summary: We propose a methodology to construct stochastic processes, including GPs, warped GPs, Student-t processes and several others, under a single unified approach.
Our approach is inspired by layer-based models, where each proposed layer changes a specific property of the generated process.
We validate the proposed model through experiments with real-world data.
- Score: 0.22843885788439797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian process (GP) priors are non-parametric generative models with
appealing modelling properties for Bayesian inference: they can model
non-linear relationships through noisy observations, have closed-form
expressions for training and inference, and are governed by interpretable
hyperparameters. However, GP models rely on Gaussianity, an assumption that
does not hold in several real-world scenarios, e.g., when observations are
bounded or have extreme-value dependencies, a natural phenomenon in physics,
finance and social sciences. Although beyond-Gaussian stochastic processes have
caught the attention of the GP community, a principled definition and a rigorous
treatment are still lacking. In this regard, we propose a methodology to
construct stochastic processes, which include GPs, warped GPs, Student-t
processes and several others under a single unified approach. We also provide
formulas and algorithms for training and inference of the proposed models in
the regression problem. Our approach is inspired by layer-based models, where
each proposed layer changes a specific property of the generated stochastic
process. That, in turn, allows us to push forward a standard Gaussian
white-noise prior towards other, more expressive stochastic processes, for which
marginals and copulas need not be Gaussian, while retaining the appealing
properties of GPs. We validate the proposed model through experiments with
real-world data.
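As a rough illustration of the push-forward idea (our own sketch, not the paper's code), the following NumPy snippet applies a "covariance layer" that correlates standard Gaussian white noise through a Cholesky factor, then a "marginal layer" that applies a monotone map so the resulting marginals are no longer Gaussian; the sinh warping and all parameter values are arbitrary choices for illustration.

```python
# Minimal sketch of a layered push-forward construction (illustrative only).
import numpy as np

def rbf_kernel(x, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel on a 1-D grid."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)

# Covariance layer: correlate white noise via a Cholesky factor.
K = rbf_kernel(x) + 1e-6 * np.eye(x.size)     # jitter for numerical stability
white = rng.standard_normal(x.size)           # standard Gaussian white-noise prior
f_gaussian = np.linalg.cholesky(K) @ white    # ordinary GP sample

# Marginal layer: a monotone map pushes the Gaussian marginals
# towards heavier-tailed ones while keeping the path structure.
f_warped = np.sinh(f_gaussian)
```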
Related papers
- Robust Gaussian Processes via Relevance Pursuit [17.39376866275623]
We propose and study a GP model that achieves robustness against sparse outliers by inferring data-point-specific noise levels.
We show, surprisingly, that the model can be parameterized such that the associated log marginal likelihood is strongly concave in the data-point-specific noise variances.
arXiv Detail & Related papers (2024-10-31T17:59:56Z)
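A minimal sketch of the mechanism in the entry above, assuming a plain zero-mean GP with a per-observation noise vector (the function name and parameterization are ours, not the paper's):

```python
# Hedged sketch: GP log marginal likelihood with data-point-specific
# noise variances, so an outlier can be absorbed by inflating its own
# variance instead of corrupting the overall fit.
import numpy as np

def gp_log_marginal(K, y, noise_vars):
    """log p(y) for y ~ N(0, K + diag(noise_vars))."""
    C = K + np.diag(noise_vars)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()            # -0.5 * log|C|
            - 0.5 * y.size * np.log(2.0 * np.pi))
```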
- Amortized Variational Inference for Deep Gaussian Processes [0.0]
Deep Gaussian processes (DGPs) are multilayer generalizations of Gaussian processes (GPs).
We introduce amortized variational inference for DGPs, which learns an inference function that maps each observation to variational parameters.
Our method performs similarly or better than previous approaches at less computational cost.
arXiv Detail & Related papers (2024-09-18T20:23:27Z)
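A toy sketch of the amortization idea from the entry above, under our own assumptions (a two-layer NumPy network; the actual method targets DGP layers): a shared function maps each observation to its variational mean and variance, instead of storing free parameters per data point.

```python
# Illustrative amortized inference function: observation -> variational params.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((64, 2)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((2, 64)), np.zeros(2)

def encode(xy):
    """Map one observation (x, y) to a variational mean and variance."""
    h = np.maximum(W1 @ xy + b1, 0.0)   # hidden layer with ReLU
    mean, log_var = W2 @ h + b2
    return mean, np.exp(log_var)        # variance kept positive via exp

mean, var = encode(np.array([0.3, -1.2]))
```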
- Polynomial Chaos Expanded Gaussian Process [2.287415292857564]
In complex and unknown processes, global models are initially generated over the entire experimental space.
This study addresses the need for models that effectively represent both global and local experimental spaces.
arXiv Detail & Related papers (2024-05-02T07:11:05Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
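Spectral normalization, used in the entry above to control variance growth over long model unrolls, is standard; a compact power-iteration version is sketched below (our rendering, not the paper's code). Dividing a weight matrix by its largest singular value bounds the layer's Lipschitz constant.

```python
# Spectral normalization via power iteration (illustrative).
import numpy as np

def spectral_normalize(W, n_iter=20):
    """Divide W by an estimate of its largest singular value."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return W / (u @ W @ v)   # result has spectral norm close to 1
```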
- Non-Gaussian Process Regression [0.0]
We extend the GP framework into a new class of time-changed GPs that allow for straightforward modelling of heavy-tailed non-Gaussian behaviours.
We present Markov chain Monte Carlo inference procedures for this model and demonstrate the potential benefits.
arXiv Detail & Related papers (2022-09-07T13:08:22Z)
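A toy rendering of the time-change idea from the entry above (ours; the paper's construction and inference are more elaborate): a GP is evaluated along a random non-decreasing clock, and marginalizing over the clock yields non-Gaussian behaviour.

```python
# Toy time-changed GP: evaluate a GP along a random, non-decreasing clock.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

# Random clock: cumulative gamma increments, mostly flat with rare jumps.
clock = np.cumsum(rng.gamma(shape=0.1, scale=1.0, size=t.size))
clock /= clock[-1]

# A standard GP sample indexed by the random clock instead of t;
# averaging over clocks gives non-Gaussian, heavier-tailed marginals.
d = clock[:, None] - clock[None, :]
K = np.exp(-0.5 * (d / 0.1) ** 2) + 1e-6 * np.eye(t.size)
f = np.linalg.cholesky(K) @ rng.standard_normal(t.size)
```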
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
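A toy rendering of an elementwise invertible ODE map like the one in the entry above (a forward Euler loop stands in for a proper solver; the vector field and the shared parameters a and b are illustrative):

```python
# Illustrative invertible ODE-based mapping: integrate dz/dt = tanh(a*z + b)
# forward to transform, backward to invert. The shared (a, b) act on
# every component of z identically.
import numpy as np

def ode_map(z, a=0.5, b=0.1, t1=1.0, n_steps=1000, forward=True):
    dt = (t1 / n_steps) * (1.0 if forward else -1.0)
    for _ in range(n_steps):
        z = z + dt * np.tanh(a * z + b)
    return z

z = np.random.default_rng(0).standard_normal(5)
z_rec = ode_map(ode_map(z), forward=False)   # approximate inverse
assert np.allclose(z_rec, z, atol=1e-2)
```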
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging a random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
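The random-feature approximation each GP learner in the entry above relies on can be sketched as a standard random Fourier feature map (our choice of rendering): with features z(x), GP regression reduces to Bayesian linear regression, whose posterior can be updated one observation at a time.

```python
# Random Fourier features approximating an RBF kernel (illustrative).
import numpy as np

def rff(x, n_features=100, lengthscale=1.0, seed=0):
    """Feature map z(x) with E[z(x) @ z(x')] ~ RBF kernel; x is (n, d)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_features, x.shape[1])) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w.T + b)
```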
- On MCMC for variationally sparse Gaussian processes: A pseudo-marginal approach [0.76146285961466]
Gaussian processes (GPs) are frequently used in machine learning and statistics to construct powerful models.
We propose a pseudo-marginal (PM) scheme that offers exact inference as well as computational gains through doubly stochastic estimators of the likelihood for large datasets.
arXiv Detail & Related papers (2021-03-04T20:48:29Z)
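The pseudo-marginal mechanism from the entry above can be sketched in a few lines (our illustration, with hypothetical function arguments): an unbiased estimate of the likelihood stands in for the exact one, and carrying the current estimate forward rather than recomputing it is what keeps the chain targeting the exact posterior.

```python
# One pseudo-marginal Metropolis-Hastings step (illustrative).
import numpy as np

def pm_mh_step(theta, log_lik_hat, estimate_log_lik, log_prior, propose, rng):
    """theta and log_lik_hat are the current state and its stored
    likelihood estimate; both are carried forward on rejection."""
    theta_new = propose(theta, rng)
    log_lik_hat_new = estimate_log_lik(theta_new, rng)   # unbiased estimator
    log_accept = (log_lik_hat_new + log_prior(theta_new)
                  - log_lik_hat - log_prior(theta))
    if np.log(rng.uniform()) < log_accept:
        return theta_new, log_lik_hat_new
    return theta, log_lik_hat
```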
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the noise in its success is still unclear.
We show that multiplicative noise, which commonly arises due to variance in the updates, induces heavy tails in the parameters.
A detailed analysis of the key factors, including step size and the data, shows that state-of-the-art neural network models exhibit similar behaviour.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
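The effect described in the entry above is easy to reproduce with a toy random recurrence (a Kesten-type process, our illustration; parameter values are arbitrary): even though the driving noise is Gaussian, the multiplicative term produces occasional extreme excursions characteristic of heavy tails.

```python
# Kesten-type recurrence: multiplicative noise yields heavy-tailed iterates.
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, x.size):
    a = rng.normal(0.7, 0.6)    # multiplicative (light-tailed) noise
    b = rng.normal(0.0, 0.1)    # additive (light-tailed) noise
    x[t] = a * x[t - 1] + b
# x now shows rare, very large excursions despite purely Gaussian inputs.
```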
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
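Why Fourier features combine naturally with derivative observations, as in the entry above, can be seen directly (a sketch under our own simplifications; the paper's quadrature-based choice of frequencies is omitted): the feature map is differentiable in closed form, so function values and gradients share one finite basis.

```python
# Fourier features and their input derivatives share one basis (illustrative).
import numpy as np

def fourier_features_and_grad(x, w, b):
    """phi(x) = cos(w x + b) and d/dx phi(x) for scalar inputs x."""
    phase = np.outer(x, w) + b      # (n, m): one column per frequency
    phi = np.cos(phase)
    dphi = -w * np.sin(phase)       # exact closed-form derivative in x
    return phi, dphi
```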
This list is automatically generated from the titles and abstracts of the papers on this site.