A Framework for Interdomain and Multioutput Gaussian Processes
- URL: http://arxiv.org/abs/2003.01115v1
- Date: Mon, 2 Mar 2020 16:24:59 GMT
- Title: A Framework for Interdomain and Multioutput Gaussian Processes
- Authors: Mark van der Wilk, Vincent Dutordoir, ST John, Artem Artemev, Vincent
Adam, and James Hensman
- Abstract summary: We present a mathematical and software framework for scalable approximate inference in GPs.
Our framework, implemented in GPflow, provides a unified interface for many existing multioutput models.
- Score: 22.62911488724047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One obstacle to the use of Gaussian processes (GPs) in large-scale problems,
and as a component in deep learning systems, is the need for bespoke derivations
and implementations for small variations in the model or inference. In order to
improve the utility of GPs we need a modular system that allows rapid
implementation and testing, as seen in the neural network community. We present
a mathematical and software framework for scalable approximate inference in
GPs, which combines interdomain approximations and multiple outputs. Our
framework, implemented in GPflow, provides a unified interface for many
existing multioutput models, as well as more recent convolutional structures.
This simplifies the creation of deep models with GPs, and we hope that this
work will encourage more interest in this approach.
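For a concrete sense of the unified interface, here is a minimal sketch of a multioutput sparse variational GP in GPflow 2.x (the toy data, dimensions, and kernel choice are illustrative, not taken from the paper); swapping the kernel or inducing-variable wrapper selects a different multioutput or interdomain approximation without changing the model code:

```python
import numpy as np
import gpflow

# Toy data: N inputs in D dimensions, P outputs, M inducing points
N, D, P, M = 100, 2, 3, 20
X = np.random.rand(N, D)
Y = np.random.rand(N, P)

# One SquaredExponential kernel shared across all P outputs; alternatives
# such as SeparateIndependent or LinearCoregionalization plug into the
# same interface
kernel = gpflow.kernels.SharedIndependent(
    gpflow.kernels.SquaredExponential(), output_dim=P
)

# Inducing points shared between outputs; interdomain approximations swap
# in a different inducing-variable class without touching the model code
iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(X[:M].copy())
)

model = gpflow.models.SVGP(
    kernel, gpflow.likelihoods.Gaussian(), iv, num_latent_gps=P
)

# Fit by maximizing the variational ELBO
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure((X, Y)), model.trainable_variables
)
mean, var = model.predict_f(X[:5])
```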
Related papers
- Performance Optimization using Multimodal Modeling and Heterogeneous GNN [1.304892050913381]
We propose a technique for tuning parallel code regions that is general enough to be adapted to multiple tasks.
In this paper, we analyze IR-based programming models to make task-specific performance optimizations.
Our experiments show that this multimodal learning based approach outperforms the state-of-the-art in all experiments.
arXiv Detail & Related papers (2023-04-25T04:27:43Z)
- Shallow and Deep Nonparametric Convolutions for Gaussian Processes [0.0]
We introduce a nonparametric process convolution formulation for GPs that alleviates the weaknesses of standard process convolution models by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models.
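For background, the classic process convolution construction that this line of work generalizes builds a GP by smoothing a white-noise process w with a filter G (standard textbook formulation, not the paper's notation):

```latex
f(x) = \int G(x - z)\, w(z)\, \mathrm{d}z,
\qquad
\operatorname{cov}\big(f(x), f(x')\big) = \int G(x - z)\, G(x' - z)\, \mathrm{d}z .
```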
arXiv Detail & Related papers (2022-06-17T19:03:04Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
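The likelihood of such an invertible mapping follows the standard change-of-variables identity for normalizing flows (a general identity, not the paper's notation): pushing a GP output f through an invertible map F_\theta gives

```latex
y = F_\theta(f), \qquad
\log p(y) = \log p\big(F_\theta^{-1}(y)\big)
          + \log\left|\det \frac{\partial F_\theta^{-1}(y)}{\partial y}\right| .
```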
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging a random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
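The random feature approximation behind each expert's scalability can be sketched directly; below is a minimal random Fourier feature approximation of an RBF kernel (the standard Rahimi and Recht construction, not the authors' code):

```python
import numpy as np

def rff(X, num_features=200, lengthscale=1.0, seed=0):
    """Random Fourier features: phi(X) @ phi(X).T approximates the RBF
    kernel k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale**2))."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Sanity check against the exact RBF Gram matrix (lengthscale = 1)
X = np.random.rand(5, 3)
Phi = rff(X, num_features=5000)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
```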
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Deep Gaussian Process Emulation using Stochastic Imputation [0.0]
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using imputation.
By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs.
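A toy sketch of the linked-GP idea: each imputed realisation of the hidden layer reduces the second layer to ordinary GP regression, and predictions are averaged over imputations. For brevity the paper's posterior imputation step is replaced here by GP prior draws, so this is illustrative only:

```python
import numpy as np
import gpflow

# Toy data and test inputs (illustrative only)
X = np.linspace(0, 1, 30).reshape(-1, 1)
Y = np.sin(8 * X) + 0.05 * np.random.randn(30, 1)
Xnew = np.linspace(0, 1, 50).reshape(-1, 1)

# Covariance of the hidden layer at training and test inputs jointly
k1 = gpflow.kernels.SquaredExponential()
Xall = np.vstack([X, Xnew])
L = np.linalg.cholesky(k1(Xall).numpy() + 1e-8 * np.eye(len(Xall)))

means = []
for _ in range(10):
    # One imputed realisation of the hidden layer (a prior draw here,
    # standing in for the paper's posterior imputation step)
    w = L @ np.random.randn(len(Xall), 1)
    # Conditioned on w, the second layer is ordinary GP regression
    gp2 = gpflow.models.GPR((w[:30], Y), kernel=gpflow.kernels.SquaredExponential())
    mu, _ = gp2.predict_f(w[30:])
    means.append(mu.numpy())

pred_mean = np.mean(means, axis=0)  # Monte Carlo average over imputations
```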
arXiv Detail & Related papers (2021-07-04T10:46:23Z)
- GPflux: A Library for Deep Gaussian Processes [31.207566616050574]
GPflux is a Python library for Bayesian deep learning with a strong emphasis on deep Gaussian processes (DGPs).
It is compatible with and built on top of the Keras deep learning ecosystem.
GPflux relies on GPflow for most of its GP objects and operations, which makes it an efficient, modular and extendable library.
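A minimal two-layer DGP in the style of GPflux's introductory example follows; the layer and helper signatures are assumptions based on the GPflux documentation and may differ between versions:

```python
import numpy as np
import tensorflow as tf
import gpflow
import gpflux

# Toy 1-D regression data (illustrative only)
X = np.linspace(0, 1, 100).reshape(-1, 1)
Y = np.cos(5 * X) + 0.1 * np.random.randn(100, 1)
Z = np.linspace(0, 1, 20).reshape(-1, 1)

def make_gp_layer():
    # Each layer wraps a multioutput kernel and inducing variables,
    # reusing the GPflow interdomain/multioutput machinery
    kernel = gpflow.kernels.SharedIndependent(
        gpflow.kernels.SquaredExponential(), output_dim=1
    )
    iv = gpflow.inducing_variables.SharedIndependentInducingVariables(
        gpflow.inducing_variables.InducingPoints(Z.copy())
    )
    return gpflux.layers.GPLayer(kernel, iv, num_data=len(X), num_latent_gps=1)

likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian())
dgp = gpflux.models.DeepGP([make_gp_layer(), make_gp_layer()], likelihood_layer)

# Training goes through the Keras ecosystem mentioned above
model = dgp.as_training_model()
model.compile(tf.optimizers.Adam(0.01))
model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)
```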
arXiv Detail & Related papers (2021-04-12T17:41:18Z)
- Deep Gaussian Processes for Few-Shot Segmentation [66.08463078545306]
Few-shot segmentation is a challenging task, requiring the extraction of a generalizable representation from only a few annotated samples.
We propose a few-shot learner formulation based on Gaussian process (GP) regression.
Our approach sets a new state-of-the-art for 5-shot segmentation, with mIoU scores of 68.1 and 49.8 on PASCAL-5i and COCO-20i, respectively.
arXiv Detail & Related papers (2021-03-30T17:56:32Z)
- GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning [23.83961717568121]
GP-Tree is a novel method for multi-class classification with Gaussian processes and deep kernel learning.
We develop a tree-based hierarchical model in which each internal node fits a GP to the data.
Our method scales well with both the number of classes and data size.
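A schematic re-implementation of the tree-routing idea only (hypothetical code; a generic binary GP classifier stands in for the paper's deep-kernel variational machinery):

```python
import numpy as np
from dataclasses import dataclass
from typing import List, Optional
from sklearn.gaussian_process import GaussianProcessClassifier

@dataclass
class Node:
    classes: List[int]                       # classes reachable below this node
    gp: Optional[GaussianProcessClassifier] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(X, y, classes):
    """Recursively split the label set; each internal node fits a binary GP."""
    node = Node(classes=list(classes))
    if len(classes) == 1:
        return node                          # leaf: a single class remains
    mid = len(classes) // 2
    left_cls, right_cls = classes[:mid], classes[mid:]
    # Binary targets: does this sample belong to the right-hand subtree?
    node.gp = GaussianProcessClassifier().fit(X, np.isin(y, right_cls).astype(int))
    mask_l, mask_r = np.isin(y, left_cls), np.isin(y, right_cls)
    node.left = build(X[mask_l], y[mask_l], left_cls)
    node.right = build(X[mask_r], y[mask_r], right_cls)
    return node

def predict(node, x):
    """Route a single input down the tree until a leaf class is reached."""
    while node.gp is not None:
        p_right = node.gp.predict_proba(x.reshape(1, -1))[0, 1]
        node = node.right if p_right > 0.5 else node.left
    return node.classes[0]

# Demo on separable toy data: four classes offset along both features
X = np.random.randn(200, 2) + np.repeat(np.arange(4), 50)[:, None]
y = np.repeat(np.arange(4), 50)
root = build(X, y, list(range(4)))
print(predict(root, X[0]))
```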
arXiv Detail & Related papers (2021-02-15T22:16:27Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
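As background for what a truncated max-product layer computes, here is a standard max-sum (max-product in log space) dynamic-programming pass along a 1-D chain; a sketch, not the paper's implementation:

```python
import numpy as np

def max_sum_chain(unary, pairwise):
    """Exact max-sum (max-product in log space) labeling of a 1-D chain.

    unary: [N, K] log-potentials per node; pairwise: [K, K] transition
    log-potentials shared along the chain.
    """
    N, K = unary.shape
    msg = np.zeros(K)                        # forward messages
    back = np.zeros((N, K), dtype=int)       # backpointers for decoding
    for i in range(1, N):
        scores = (unary[i - 1] + msg)[:, None] + pairwise   # [K, K]
        back[i] = scores.argmax(axis=0)
        msg = scores.max(axis=0)
    labels = np.zeros(N, dtype=int)
    labels[-1] = int(np.argmax(unary[-1] + msg))
    for i in range(N - 1, 0, -1):            # backtrack the optimal path
        labels[i - 1] = back[i, labels[i]]
    return labels

# Toy example: 6 nodes, 4 labels, pairwise term favoring similar labels
unary = np.log(np.random.rand(6, 4))
pairwise = -0.5 * np.abs(np.arange(4)[:, None] - np.arange(4)[None, :])
print(max_sum_chain(unary, pairwise))
```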
arXiv Detail & Related papers (2020-03-13T13:11:35Z)