Skew Gaussian Processes for Classification
- URL: http://arxiv.org/abs/2005.12987v1
- Date: Tue, 26 May 2020 19:13:03 GMT
- Title: Skew Gaussian Processes for Classification
- Authors: Alessio Benavoli and Dario Azzimonti and Dario Piga
- Abstract summary: We propose Skew-Gaussian processes (SkewGPs) as a non-parametric prior over functions.
SkewGPs inherit all good properties of GPs and increase their flexibility by allowing asymmetry in the probabilistic model.
- Score: 0.225596179391365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian processes (GPs) are distributions over functions that provide a Bayesian nonparametric approach to regression and classification. Despite their success, GPs are of limited use in some applications: in some cases, a distribution that is symmetric about its mean is an unreasonable model. Symmetry forces the mean and the median to coincide, whereas in an asymmetric (skewed) distribution the two can be different numbers. In this paper, we propose Skew-Gaussian processes (SkewGPs) as a nonparametric prior over functions. A SkewGP extends the multivariate Unified Skew-Normal distribution over finite-dimensional vectors to a stochastic process. The SkewGP class of distributions includes GPs; SkewGPs therefore inherit all the good properties of GPs and increase their flexibility by allowing asymmetry in the probabilistic model. By exploiting the fact that the SkewGP prior and the probit likelihood are conjugate, we derive closed-form expressions for the marginal likelihood and the predictive distribution of this new nonparametric classifier. We verify empirically that the proposed SkewGP classifier outperforms a GP classifier based on either Laplace's method or Expectation Propagation.
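As a minimal, hand-written illustration of two facts the abstract leans on (this is not code from the paper): a skew-normal's mean and median coincide only in the symmetric case, and one-dimensional probit-Gaussian integrals have a closed form, which is the mechanism behind the conjugacy claim. scipy is the only assumed dependency.

```python
# Illustration only (not the paper's code): two facts behind the abstract.
import numpy as np
from scipy.stats import skewnorm, norm

# (1) Mean vs. median of a skew-normal: equal at shape a=0 (the Gaussian
#     case), increasingly different as the skewness parameter a grows.
for a in [0.0, 2.0, 5.0]:
    m, med = skewnorm.mean(a), skewnorm.median(a)
    print(f"shape={a:4.1f}  mean={m:+.4f}  median={med:+.4f}  gap={m - med:+.4f}")

# (2) Probit-Gaussian conjugacy in one dimension: for f ~ N(mu, s2),
#     E[Phi(f)] = Phi(mu / sqrt(1 + s2)).  Checked by a Riemann sum.
mu, s2 = 0.7, 1.5
grid, dx = np.linspace(-12.0, 12.0, 200_001, retstep=True)
numeric = np.sum(norm.cdf(grid) * norm.pdf(grid, mu, np.sqrt(s2))) * dx
closed = norm.cdf(mu / np.sqrt(1.0 + s2))
assert abs(numeric - closed) < 1e-6
print("E[Phi(f)] numeric:", numeric, " closed form:", closed)
```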
Related papers
- Gaussian Process Regression with Soft Inequality and Monotonicity Constraints [0.0]
We introduce a new GP method that enforces the physical constraints in a probabilistic manner.
This GP model is trained by quantum-inspired Hamiltonian Monte Carlo (QHMC).
arXiv Detail & Related papers (2024-04-03T17:09:25Z)
- Shallow and Deep Nonparametric Convolutions for Gaussian Processes [0.0]
We introduce a nonparametric process convolution formulation for GPs that alleviates the weaknesses of existing formulations by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models; a minimal process-convolution sketch follows this entry.
arXiv Detail & Related papers (2022-06-17T19:03:04Z)
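As a rough sketch of the basic process-convolution construction behind the entry above (my illustration, not the authors' code): a GP sample can be produced by convolving white noise with a smoothing kernel, and the kernel's length-scale (the hypothetical ell below) shapes the resulting covariance.

```python
# Minimal process-convolution sketch (illustration only): a GP draw is
# produced by smoothing white noise with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 10.0, 500)          # input grid
dx = x[1] - x[0]
white = rng.normal(size=x.size)          # white-noise "base" process

def smoothing_kernel(r, ell=0.5):
    """Gaussian smoothing kernel with length-scale ell (assumed value)."""
    return np.exp(-0.5 * (r / ell) ** 2)

# Discrete convolution: f(x_i) = sum_j k(x_i - x_j) * white_j * dx
K = smoothing_kernel(x[:, None] - x[None, :])
f = K @ white * dx                       # one approximate GP sample

print("sample mean ~ 0:", f.mean())
print("sample std     :", f.std())
```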
- Bézier Curve Gaussian Processes [8.11969931278838]
This paper proposes a new probabilistic sequence model built on probabilistic Bézier curves.
Combined with a Mixture Density Network, Bayesian conditional inference can be performed without the need for mean-field variational approximation.
The model is used for pedestrian trajectory prediction, where a generated prediction also serves as a GP prior; a small Bézier-curve sketch follows this entry.
arXiv Detail & Related papers (2022-05-03T19:49:57Z)
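To make the construction in the entry above concrete, here is a small sketch under the assumption that a probabilistic Bézier curve places independent Gaussians on its control points; the curve's pointwise mean and variance then follow from the Bernstein weights. This is an illustration, not the paper's model.

```python
# Sketch of a probabilistic Bezier curve (illustration, assuming
# independent Gaussian control points): the curve at parameter t is a
# Bernstein-weighted sum of control points, so mean/variance are closed form.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial b_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t) ** (n - i)

mu = np.array([0.0, 1.5, 0.5, 2.0])      # control-point means (assumed values)
var = np.array([0.1, 0.2, 0.2, 0.1])     # control-point variances (assumed)
n = len(mu) - 1

t = np.linspace(0.0, 1.0, 101)
B = np.array([[bernstein(n, i, tk) for i in range(n + 1)] for tk in t])

curve_mean = B @ mu                       # E[f(t)]
curve_var = (B**2) @ var                  # Var[f(t)] for independent points

print(curve_mean[:3], curve_var[:3])
```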
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing the proposed distributions within a variational autoencoder and a latent space network model; a minimal wrapping sketch follows this entry.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
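The simplest instance of the wrapping construction mentioned above is the circle: the sketch below (mine, not the authors' code) wraps a Euclidean Gaussian onto S^1 and recovers its circular mean.

```python
# Minimal wrapped-normal sketch (illustration only): push a Euclidean
# Gaussian onto the circle S^1 by wrapping samples modulo 2*pi.
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = np.pi / 4, 0.8               # parameters of the base Gaussian
z = rng.normal(mu, sigma, size=10_000)   # samples on the real line
theta = np.mod(z, 2 * np.pi)             # wrapped samples on the circle

# Circular mean via the mean resultant vector.
circ_mean = np.angle(np.exp(1j * theta).mean())
print("circular mean ~", circ_mean, "(target:", mu, ")")
```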
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
The resulting Non-Gaussian Gaussian Processes (NGGPs) outperform competing state-of-the-art approaches on a diverse set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging a random-feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions; a random-feature sketch follows this entry.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
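The random-feature approximation each GP expert above relies on can be sketched as Bayesian linear regression on random Fourier features, which is what makes per-datum online updates cheap. This is a hand-rolled sketch for 1-D inputs, not the IE-GP implementation; D, ell, and noise are illustrative values.

```python
# Sketch of a random-feature GP expert (illustration, not the IE-GP code):
# an RBF-kernel GP is approximated by Bayesian linear regression on
# random Fourier features, updated one datum at a time.
import numpy as np

rng = np.random.default_rng(2)
D, ell, noise = 100, 1.0, 0.1            # features, length-scale, noise std

W = rng.normal(0.0, 1.0 / ell, size=D)   # spectral frequencies (1-D inputs)
b = rng.uniform(0.0, 2 * np.pi, size=D)  # random phases

def phi(x):
    """Random Fourier features approximating an RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W * x + b)

# Online Bayesian linear regression: posterior precision A and vector r,
# with a unit-variance Gaussian prior on the feature weights.
A = np.eye(D)
r = np.zeros(D)

for x, y in [(0.0, 0.2), (1.0, 0.9), (2.0, 0.1)]:   # streaming toy data
    f = phi(x)
    A += np.outer(f, f) / noise**2        # rank-1 precision update
    r += f * y / noise**2

w_mean = np.linalg.solve(A, r)            # posterior mean weights
print("predictive mean at 1.5:", phi(1.5) @ w_mean)
```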
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- On MCMC for variationally sparse Gaussian processes: A pseudo-marginal approach [0.76146285961466]
Gaussian processes (GPs) are frequently used in machine learning and statistics to construct powerful models.
We propose a pseudo-marginal (PM) scheme that offers exact inference as well as computational gains through doubly stochastic estimators of the likelihood for large datasets; a minimal pseudo-marginal sketch follows this entry.
arXiv Detail & Related papers (2021-03-04T20:48:29Z)
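The pseudo-marginal idea above, replacing an intractable likelihood inside Metropolis-Hastings with an unbiased estimate, fits in a few lines. The toy latent-variable likelihood below is my own stand-in, not the paper's doubly stochastic estimator; the key detail is that the stored estimate is reused whenever a proposal is rejected.

```python
# Pseudo-marginal Metropolis-Hastings sketch (illustration only): the
# exact likelihood is replaced by an unbiased Monte Carlo estimate, and
# the chain still targets the exact posterior.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 1.0, size=20)     # toy data with unknown mean theta

def lik_estimate(theta, n_particles=30):
    """Unbiased estimate of a toy intractable likelihood:
    p(y_i | theta) = E_z[ N(y_i | theta + z_i, 1) ], z_i ~ N(0, 0.5).
    Independent particles per datum keep the product unbiased."""
    z = rng.normal(0.0, np.sqrt(0.5), size=(data.size, n_particles))
    per_y = np.exp(-0.5 * (data[:, None] - theta - z) ** 2) / np.sqrt(2 * np.pi)
    return per_y.mean(axis=1).prod()

theta, L = 0.0, lik_estimate(0.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)          # random-walk proposal
    L_prop = lik_estimate(prop)
    if rng.uniform() < L_prop / L:               # flat prior assumed
        theta, L = prop, L_prop                  # keep old estimate on reject
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))
```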
- A unified framework for closed-form nonparametric regression, classification, preference and mixed problems with Skew Gaussian Processes [1.0742675209112622]
Skew-Gaussian processes (SkewGPs) extend the Unified Skew-Normal distributions over finite-dimensional vectors to distributions over functions.
We show that SkewGP and probit likelihood are conjugate, which allows us to compute the exact posterior for non-parametric binary classification and preference learning.
arXiv Detail & Related papers (2020-12-12T15:58:16Z)
- Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.