Collaborative Nonstationary Multivariate Gaussian Process Model
- URL: http://arxiv.org/abs/2106.00719v1
- Date: Tue, 1 Jun 2021 18:25:22 GMT
- Title: Collaborative Nonstationary Multivariate Gaussian Process Model
- Authors: Rui Meng, Herbie Lee, Kristofer Bouchard
- Abstract summary: We propose a novel model called the collaborative nonstationary Gaussian process model (CNMGP).
CNMGP allows us to model data in which outputs do not share a common input set, with a computational complexity independent of the size of the inputs and outputs.
We show that our model generally provides better predictive performance than the state-of-the-art, and also provides estimates of time-varying correlations that differ across outputs.
- Score: 2.362467745272567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, multi-output Gaussian process regression models either do not
model nonstationarity or are associated with severe computational burdens and
storage demands. Nonstationary multivariate Gaussian process models (NMGP) use
a nonstationary covariance function with an input-dependent linear model of
coregionalisation to jointly model input-dependent correlation, scale, and
smoothness of outputs. Variational sparse approximation relies on inducing
points to enable scalable computations. Here, we take the best of both worlds:
considering an inducing variable framework on the underlying latent functions
in NMGP, we propose a novel model called the collaborative nonstationary
Gaussian process model (CNMGP). For CNMGP, we derive computationally tractable
variational bounds amenable to doubly stochastic variational inference.
Together, this allows us to model data in which outputs do not share a common
input set, with a computational complexity that is independent of the size of
the inputs and outputs. We illustrate the performance of our method on
synthetic data and three real datasets and show that our model generally
provides better predictive performance than the state-of-the-art, and also
provides estimates of time-varying correlations that differ across outputs.
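The covariance structure the abstract describes, a linear model of coregionalisation whose mixing weights depend on the input, can be illustrated with a minimal sketch. This is not the authors' code: the RBF latent kernels, the single latent process, and the particular weight functions below are all assumptions chosen only to show how input-dependent mixing induces time-varying output correlations.

```python
import numpy as np

def rbf(x1, x2, lengthscale):
    # Squared-exponential covariance between two 1-D input sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def nonstationary_lmc_cov(x1, x2, mixing_fns, lengthscales):
    """Joint covariance of D outputs under an input-dependent linear
    model of coregionalisation: f_d(x) = sum_q W_dq(x) g_q(x), where
    the g_q are independent latent GPs and the mixing weights W_dq(x)
    vary with the input, so output correlations change over time.

    mixing_fns[q] maps an input array of length N to an (N, D) weight
    matrix.  Returns the (D*N1, D*N2) cross-covariance matrix.
    """
    D = mixing_fns[0](x1).shape[1]
    n1, n2 = len(x1), len(x2)
    K = np.zeros((D * n1, D * n2))
    for w_fn, ell in zip(mixing_fns, lengthscales):
        Kq = rbf(x1, x2, ell)            # covariance of latent GP g_q
        W1, W2 = w_fn(x1), w_fn(x2)      # input-dependent mixing weights
        for d1 in range(D):
            for d2 in range(D):
                K[d1*n1:(d1+1)*n1, d2*n2:(d2+1)*n2] += \
                    np.outer(W1[:, d1], W2[:, d2]) * Kq
    return K
```

In a stationary LMC the weights W_dq are constants; letting them vary with x is what gives the input-dependent correlation and scale the abstract refers to.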
Related papers
- Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference [2.1249213103048414]
The Latent Variable MOGP (LV-MOGP) allows efficient generalization to new outputs with few data points.
The computational complexity of the LV-MOGP grows linearly with the number of outputs.
We propose a variational inference approach for the LV-MOGP that allows mini-batches for both inputs and outputs.
arXiv Detail & Related papers (2024-07-02T17:53:56Z) - Preventing Model Collapse in Gaussian Process Latent Variable Models [11.45681373843122]
This paper theoretically examines the impact of projection variance on model collapse through the lens of a linear GPLVM.
We tackle model collapse due to inadequate kernel flexibility by integrating the spectral mixture (SM) kernel and a differentiable random Fourier feature (RFF) kernel approximation.
The proposed GPLVM, named advisedRFLVM, is evaluated across diverse datasets and consistently outperforms various competing models.
arXiv Detail & Related papers (2024-04-02T06:58:41Z) - Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z) - Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - Multi-Response Heteroscedastic Gaussian Process Models and Their Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z) - Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - Scalable mixed-domain Gaussian process modeling and model reduction for longitudinal data [5.00301731167245]
We derive a basis function approximation scheme for mixed-domain covariance functions.
We show that we can approximate the exact GP model accurately in a fraction of the runtime.
We also demonstrate a scalable model reduction workflow for obtaining smaller and more interpretable models.
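The entry above relies on a basis-function approximation of GP covariance functions. As a generic illustration of that idea (the standard reduced-rank Hilbert-space approximation of a squared-exponential kernel, not the paper's mixed-domain scheme; the domain half-width L and rank m are assumptions), the Gram matrix can be approximated at linear cost in the number of basis functions:

```python
import numpy as np

def hilbert_gp_features(x, m, L):
    # Laplacian eigenfunctions on [-L, L]: phi_j(x) = sin(pi*j*(x+L)/(2L)) / sqrt(L).
    j = np.arange(1, m + 1)
    return np.sin(np.pi * j[None, :] * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def rbf_spectral_density(w, lengthscale, variance):
    # 1-D spectral density of the squared-exponential kernel.
    return variance * np.sqrt(2 * np.pi) * lengthscale * np.exp(-0.5 * (lengthscale * w) ** 2)

def approx_rbf_gram(x, m=64, L=5.0, lengthscale=1.0, variance=1.0):
    """Rank-m approximation K ~= Phi diag(S(w_j)) Phi^T of the RBF Gram
    matrix, accurate for inputs well inside the interval [-L, L]."""
    Phi = hilbert_gp_features(x, m, L)
    w = np.pi * np.arange(1, m + 1) / (2 * L)   # eigenfrequencies
    S = rbf_spectral_density(w, lengthscale, variance)
    return (Phi * S) @ Phi.T
```

Because the features are fixed, downstream GP computations reduce to linear algebra with an m x m matrix, which is the source of the runtime savings such schemes report.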
arXiv Detail & Related papers (2021-11-03T04:47:37Z) - On MCMC for variationally sparse Gaussian processes: A pseudo-marginal approach [0.76146285961466]
Gaussian processes (GPs) are frequently used in machine learning and statistics to construct powerful models.
We propose a pseudo-marginal (PM) scheme that offers exact inference as well as computational gains through doubly stochastic estimators of the likelihood, even for large datasets.
arXiv Detail & Related papers (2021-03-04T20:48:29Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
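The score-based training idea behind this entry can be illustrated with Hyvärinen's implicit score-matching objective in one dimension. This is a generic demo of score matching, not the AR-CSM objective itself; the Gaussian data and the two candidate score functions are assumptions for illustration only.

```python
import numpy as np

def implicit_sm_loss(score_fn, dscore_fn, x):
    """Hyvarinen's implicit score-matching objective
    E[ 0.5 * s(x)**2 + s'(x) ], minimised by the true data score
    without ever evaluating a normalising constant."""
    return np.mean(0.5 * score_fn(x) ** 2 + dscore_fn(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # samples from N(0, 1)

# The true score of N(0, 1) is s(x) = -x; a mis-specified score does worse.
loss_true = implicit_sm_loss(lambda t: -t, lambda t: -np.ones_like(t), x)
loss_bad = implicit_sm_loss(lambda t: -2 * t, lambda t: -2 * np.ones_like(t), x)
```

Because only the score (the derivative of the log-density) enters the loss, no sampling or adversarial training is needed, which is the efficiency property the summary highlights.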
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.