Shallow and Deep Nonparametric Convolutions for Gaussian Processes
- URL: http://arxiv.org/abs/2206.08972v1
- Date: Fri, 17 Jun 2022 19:03:04 GMT
- Title: Shallow and Deep Nonparametric Convolutions for Gaussian Processes
- Authors: Thomas M. McDonald, Magnus Ross, Michael T. Smith, Mauricio A. Álvarez
- Abstract summary: We introduce a nonparametric process convolution formulation for GPs that alleviates the weaknesses of previous approaches by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key challenge in the practical application of Gaussian processes (GPs) is
selecting a proper covariance function. The moving average, or process
convolution, construction of GPs allows some additional flexibility, but still
requires choosing a proper smoothing kernel, which is non-trivial. Previous
approaches have built covariance functions by using GP priors over the
smoothing kernel, and by extension the covariance, as a way to bypass the need
to specify it in advance. However, such models have been limited in several
ways: they are restricted to single-dimensional inputs, e.g. time; they only
allow modelling of single outputs; and they do not scale to large datasets since
inference is not straightforward. In this paper, we introduce a nonparametric
process convolution formulation for GPs that alleviates these weaknesses by
using a functional sampling approach based on Matheron's rule to perform fast
sampling using interdomain inducing variables. Furthermore, we propose a
composition of these nonparametric convolutions that serves as an alternative
to classic deep GP models, and allows the covariance functions of the
intermediate layers to be inferred from the data. We test the performance of
our model on benchmarks for single output GPs, multiple output GPs and deep GPs
and find that in many cases our approach can provide improvements over standard
GP models.
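To make the construction concrete: in the standard process convolution (moving average) view, the output function is a smoothing kernel convolved with a base process, and the paper's move is to place a GP prior over the smoothing kernel itself. A schematic in my own notation, not reproduced from the paper:

```latex
% Process convolution: smoothing kernel G convolved with a base process u.
f(x) = \int G(x - z)\, u(z)\, dz
% Fixing G parametrically gives a standard covariance; placing a GP prior
% over G makes the induced covariance of f nonparametric.
```

Matheron's rule, which the abstract credits for fast functional sampling, updates a joint prior sample into a posterior sample without re-decomposing the covariance. Below is a minimal NumPy sketch with a fixed RBF kernel and ordinary inducing points; the paper uses interdomain inducing variables and the nonparametric convolution, which are not reproduced here:

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)      # test inputs
Z = np.linspace(-2, 2, 8)        # inducing inputs (illustrative stand-ins)
u_obs = np.sin(Z)                # pretend values of the inducing variables

# Jointly sample a prior path at X and Z.
XZ = np.concatenate([X, Z])
L = np.linalg.cholesky(rbf(XZ, XZ) + 1e-8 * np.eye(len(XZ)))
prior = L @ rng.standard_normal(len(XZ))
f_prior, u_prior = prior[:len(X)], prior[len(X):]

# Matheron's rule: pathwise update of the prior sample to a posterior sample.
Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
f_post = f_prior + rbf(X, Z) @ np.linalg.solve(Kzz, u_obs - u_prior)
```

The point of the pathwise form is that the cubic-cost solve involves only the m inducing variables, so whole-function samples stay cheap even when the convolutions are composed into a deep model.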
Related papers
- FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared with the state-of-the-art.
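For context, the multivariate Hawkes processes targeted by such inference have the standard conditional intensity below (generic form, my notation, not copied from the paper); the finite-support assumption means each kernel vanishes outside a window [0, W], which is what makes discretized inference cheap:

```latex
\lambda_i(t) = \mu_i + \sum_{j=1}^{p} \; \sum_{t_n^{(j)} < t} \phi_{ij}\big(t - t_n^{(j)}\big),
\qquad \phi_{ij}(s) = 0 \ \text{for}\ s \notin [0, W]
```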
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, a random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
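The random feature-based kernel approximation referred to above is, in its standard Rahimi-Recht form, a finite feature map whose inner product approximates an RBF kernel; a minimal sketch (dimensions and values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, ls = 256, 2, 0.5           # feature count, input dim, lengthscale

# Random Fourier features: k(x, x') is approximated by phi(x) @ phi(x').
W = rng.standard_normal((D, d)) / ls
b = rng.uniform(0.0, 2 * np.pi, D)

def phi(x):
    """Feature map for inputs x of shape (n, d)."""
    return np.sqrt(2.0 / D) * np.cos(x @ W.T + b)

# A GP sample collapses to a parametric function: f ~ GP(0, k), approximately.
w = rng.standard_normal(D)
f = lambda x: phi(x) @ w
```

This is what makes Thompson sampling over the ensemble scalable: a posterior function sample becomes an explicit, everywhere-evaluable function that the acquisition step can optimize directly.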
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Scaling Gaussian Process Optimization by Evaluating a Few Unique
Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
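Below is a minimal sketch of the "stick to a candidate for several evaluations" idea grafted onto a generic GP-UCB loop; the kernel, batch size B, and toy objective are illustrative, and the paper's exact adaptation rules are not reproduced:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(2)
f_true = lambda x: np.sin(3 * x) + 0.1 * rng.standard_normal()
grid = np.linspace(0, 1, 200)    # candidate inputs
X, y, noise, B = [], [], 0.1, 5  # B = repeated evaluations per candidate

for step in range(10):
    if X:
        Xa, ya = np.array(X), np.array(y)
        Kxx = rbf(Xa, Xa) + noise**2 * np.eye(len(Xa))
        Ks = rbf(grid, Xa)
        mu = Ks @ np.linalg.solve(Kxx, ya)
        var = 1.0 - np.sum(Ks * np.linalg.solve(Kxx, Ks.T).T, axis=1)
        ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
    else:
        ucb = np.ones_like(grid)
    x_next = grid[np.argmax(ucb)]
    # Key idea: evaluate the chosen candidate B times before refitting,
    # amortizing the cubic-cost GP posterior update over a batch.
    for _ in range(B):
        X.append(x_next)
        y.append(f_true(x_next))
```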
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
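Schematically, an invertible marginal map makes the likelihood tractable via the change-of-variables formula; because the summary says the ODE-based map acts on each component with shared parameters, the Jacobian term decomposes into scalar log-derivatives (my notation, not the paper's):

```latex
% y observed; g_\theta maps it back to a Gaussian/GP layer f = g_\theta(y):
\log p(\mathbf{y}) = \log \mathcal{N}\big(g_\theta(\mathbf{y}) \mid \mathbf{0}, \mathbf{K}\big)
  + \sum_{n=1}^{N} \log \left| \frac{\partial g_\theta(y_n)}{\partial y_n} \right|
```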
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
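The data-adaptive weights can be read as online Bayesian model averaging over the kernel dictionary; a minimal sketch of such an update (illustrative, not the paper's exact algorithm):

```python
import numpy as np

def log_pred_density(y, mu, var):
    """Gaussian predictive log-density of observation y under each expert."""
    return -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

log_w = np.log(np.ones(3) / 3)   # one expert per kernel; start uniform

def update(y, mus, vars_):
    """Reweight experts by how well each one predicted the new observation."""
    global log_w
    log_w = log_w + log_pred_density(y, mus, vars_)
    log_w -= np.logaddexp.reduce(log_w)          # renormalize in log space
    return np.exp(log_w)

# The synthesized prediction is the weighted mixture of expert means:
mus, vars_ = np.array([0.1, 0.3, -0.2]), np.array([0.5, 0.4, 0.6])
weights = update(0.25, mus, vars_)
ensemble_mean = weights @ mus
```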
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Deep Gaussian Process Emulation using Stochastic Imputation [0.0]
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using stochastic imputation.
By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs.
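Schematically, conditioning on imputed hidden-layer values is what breaks the intractable coupling: for a two-layer DGP, averaging over imputations of the hidden layer leaves only ordinary, linked GP predictions (my notation, a sketch rather than the paper's derivation):

```latex
% Two-layer DGP y = f_2(f_1(x)) + noise; marginalizing f_1 is intractable,
% but averaging over imputed samples w of the hidden layer is not:
p(y \mid x) \approx \frac{1}{S} \sum_{s=1}^{S} p\big(y \mid f_2(w^{(s)})\big),
\qquad w^{(s)} \sim p\big(f_1(x) \mid \text{data}\big)
```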
arXiv Detail & Related papers (2021-07-04T10:46:23Z)
- MuyGPs: Scalable Gaussian Process Hyperparameter Estimation Using Local
Cross-Validation [1.2233362977312945]
We present MuyGPs, a novel, efficient GP hyperparameter estimation method.
MuyGPs builds upon prior methods that take advantage of the nearest neighbors structure of the data.
We show that our method outperforms all known competitors both in terms of time-to-solution and the root mean squared error of the predictions.
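A minimal sketch of the nearest-neighbors plus leave-one-out idea as I read the summary; the kernel, neighbor count, and squared-error loss are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def rbf(a, b, ls):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def loo_mse(ls, X, y, k=10, noise=1e-2):
    """Leave-one-out MSE using only each point's k nearest neighbors."""
    err = 0.0
    for i in range(len(X)):
        d = ((X - X[i]) ** 2).sum(-1)
        nn = np.argsort(d)[1:k + 1]              # skip the point itself
        Knn = rbf(X[nn], X[nn], ls) + noise * np.eye(k)
        kxn = rbf(X[i:i + 1], X[nn], ls)
        pred = float(kxn @ np.linalg.solve(Knn, y[nn]))
        err += (y[i] - pred) ** 2
    return err / len(X)

# Hyperparameter estimation reduces to minimizing the LOO objective:
rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(200)
best_ls = min([0.1, 0.2, 0.4, 0.8], key=lambda ls: loo_mse(ls, X, y))
```

Restricting every posterior solve to k neighbors is what removes the usual cubic scaling in the full dataset size.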
arXiv Detail & Related papers (2021-04-29T18:10:21Z)
- Transforming Gaussian Processes With Normalizing Flows [15.886048234706633]
We show that a parametric invertible transformation can be made input-dependent and encode interpretable prior knowledge.
We derive a variational approximation to the resulting inference problem, which is as fast as variational GP regression.
The resulting algorithm's computational and inferential performance is excellent, and we demonstrate this on a range of data sets.
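One concrete instance of an input-dependent invertible transformation is a sinh-arcsinh warp whose parameters vary with the input; the specific family here is my choice for illustration, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-2, 2, 200)

# Draw a GP prior sample f(X) under an RBF kernel.
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.5**2)
f = np.linalg.cholesky(K + 1e-8 * np.eye(200)) @ rng.standard_normal(200)

# Input-dependent, invertible marginal warp: skewness and tail weight are
# simple functions of x, encoding prior knowledge that varies over the input.
skew = 0.5 * np.tanh(X)          # epsilon(x)
tail = 1.0 + 0.3 * X**2          # delta(x) > 0 keeps the map monotone in f
y = np.sinh(tail * np.arcsinh(f) + skew)
```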
arXiv Detail & Related papers (2020-11-03T09:52:37Z)
- Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
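The sparse approximations in question take the standard inducing-point variational form below (standard notation, not copied from the paper); a partial inference network amortizes the variational parameters m and S from subsets of the observations instead of optimizing them freely:

```latex
q(f) = \int p(f \mid \mathbf{u})\, q(\mathbf{u})\, d\mathbf{u},
\qquad q(\mathbf{u}) = \mathcal{N}(\mathbf{u} \mid \mathbf{m}, \mathbf{S})
```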
arXiv Detail & Related papers (2020-10-20T10:19:56Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.