Scalable Multi-Task Gaussian Processes with Neural Embedding of
Coregionalization
- URL: http://arxiv.org/abs/2109.09261v1
- Date: Mon, 20 Sep 2021 01:28:14 GMT
- Title: Scalable Multi-Task Gaussian Processes with Neural Embedding of
Coregionalization
- Authors: Haitao Liu, Jiaqi Ding, Xinyu Xie, Xiaomo Jiang, Yusong Zhao, Xiaofang
Wang
- Abstract summary: Multi-task regression exploits task similarity to transfer knowledge across related tasks and improve performance.
The linear model of coregionalization (LMC) is a well-known MTGP paradigm which captures task dependencies through a linear combination of several independent and diverse GPs.
We develop the neural embedding of coregionalization, which transforms the latent GPs into a high-dimensional latent space to induce rich yet diverse behaviors.
- Score: 9.873139480223367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task regression exploits task similarity to transfer knowledge
across related tasks and improve performance. Applying Gaussian processes (GPs)
in this scenario yields a non-parametric yet informative Bayesian multi-task
regression paradigm. The multi-task GP (MTGP) provides not only the prediction
mean but also the associated prediction variance to quantify uncertainty, and
has thus gained popularity in various scenarios. The linear model of
coregionalization (LMC) is a well-known MTGP paradigm which captures task
dependencies through a linear combination of several independent and diverse
GPs. The LMC, however, suffers from high model complexity and limited model
capability when handling complicated multi-task cases. To address this, we
develop the neural embedding of coregionalization, which transforms the latent
GPs into a high-dimensional latent space to induce rich yet diverse behaviors.
Furthermore, we use advanced variational inference as well as sparse
approximation to devise a tight and compact evidence lower bound (ELBO) for
higher-quality scalable model inference. Extensive numerical experiments verify
the higher prediction quality and better generalization of our model, named
NSVLMC, on various real-world multi-task datasets and on the cross-fluid
modeling of an unsteady fluidized bed.
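To make the LMC structure and the neural embedding concrete, here is a minimal
NumPy sketch of sampling tasks from an LMC-style prior and then lifting the
latent GPs through a small network before mixing. Everything in it (the RBF
kernel, the tanh embedding, the mixing matrices) is illustrative and assumed,
not the authors' NSVLMC implementation.

```python
# Toy LMC prior sample plus a schematic neural embedding of coregionalization.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-|x - x'|^2 / (2 l^2))."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
n, Q, H, T = 50, 2, 8, 3             # inputs, latent GPs, embedding dim, tasks
X = np.linspace(0, 1, n)[:, None]

# Q independent latent GPs g_q ~ GP(0, k) sampled at X (one per column).
K = rbf_kernel(X, X) + 1e-6 * np.eye(n)
G = np.linalg.cholesky(K) @ rng.standard_normal((n, Q))

# Classic LMC: f_t(x) = sum_q a_{t,q} g_q(x), i.e. F = G A^T with A in R^{T x Q}.
A = rng.standard_normal((T, Q))
F_lmc = G @ A.T                                        # (n, T) task outputs

# Neural embedding (schematic): lift the Q latent GPs into an H-dimensional
# latent space with a small network before mixing, inducing richer yet still
# diverse task behaviors than the purely linear map.
W1 = rng.standard_normal((Q, H)) / np.sqrt(Q)
Phi = np.tanh(G @ W1)                                  # (n, H) embedding
A_emb = rng.standard_normal((T, H)) / np.sqrt(H)
F_neural = Phi @ A_emb.T                               # (n, T) task outputs
```

The shapes show where the extra capacity comes from: the classic LMC mixes Q
latent functions linearly, while the embedded variant mixes H > Q nonlinearly
derived features.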
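The paper's tight, compact ELBO is not spelled out in the abstract. For
orientation only, the generic sparse variational bound that LMC-style models
build on, with Q sets of inducing variables u_q and Gaussian posteriors
q(u_q) = N(m_q, S_q), has the form

$$
\log p(\mathbf{Y}) \;\ge\; \mathcal{L} \;=\; \sum_{t=1}^{T}\sum_{i=1}^{n_t}
\mathbb{E}_{q(f_t(\mathbf{x}_{t,i}))}\!\left[\log p\big(y_{t,i}\mid f_t(\mathbf{x}_{t,i})\big)\right]
\;-\; \sum_{q=1}^{Q}\mathrm{KL}\!\left[q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\right].
$$

The expected log-likelihood factorizes over tasks and data points, which is
what permits minibatch training, while the KL terms regularize the Q latent
processes through their inducing variables.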
Related papers
- Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting [16.640336442849282]
We formulate a multi-task optimization problem as a regularization technique to enable single-task models to leverage multi-task learning information.
We derive a closed-form solution for multi-task optimization in the context of linear models (a generic closed-form sketch appears after this list).
arXiv Detail & Related papers (2024-06-14T17:59:25Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output GP (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression, and point-process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Learning Multi-Task Gaussian Process Over Heterogeneous Input Domains [27.197576157695096]
Multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian model for learning correlated tasks.
This paper presents a novel heterogeneous stochastic variational linear model of coregionalization (HSVLMC) for simultaneously learning tasks with varied input domains.
arXiv Detail & Related papers (2022-02-25T11:55:09Z)
- A Statistics and Deep Learning Hybrid Method for Multivariate Time Series Forecasting and Mortality Modeling [0.0]
The Exponential Smoothing Recurrent Neural Network (ES-RNN) is a hybrid of a statistical forecasting model and a recurrent neural network variant.
ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 (M4) Forecasting Competition.
arXiv Detail & Related papers (2021-12-16T04:44:19Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled way for adaptive integration of different modalities and produces trustworthy regression results (a toy NIG sketch appears after this list).
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Modulating Scalable Gaussian Processes for Expressive Statistical Learning [25.356503463916816]
A Gaussian process (GP) learns the statistical relationship between inputs and outputs, offering not only the prediction mean but also the associated variability.
This article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs, and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations (a toy heteroscedastic sketch appears after this list).
arXiv Detail & Related papers (2020-08-29T06:41:45Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies (a schematic intensity sketch appears after this list).
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
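For the random matrix theory entry above: its estimator is not reproduced
here, but as a generic illustration of why linear multi-task objectives admit
closed forms, the following hypothetical shared-reference ridge regularizes
each single-task model toward a pooled fit. The function `shared_ridge` and
its coupling scheme are a toy construction, not the cited paper's method.

```python
# Toy multi-task ridge: w_t = argmin |X_t w - y_t|^2 + lam |w - w0|^2, where
# w0 is a pooled least-squares fit carrying cross-task information.
import numpy as np

def shared_ridge(Xs, ys, lam=1.0):
    """Closed form per task: w_t = (X_t^T X_t + lam I)^{-1}(X_t^T y_t + lam w0)."""
    Xall, yall = np.vstack(Xs), np.concatenate(ys)
    w0 = np.linalg.lstsq(Xall, yall, rcond=None)[0]    # shared reference
    d = Xall.shape[1]
    return [np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)
            for X, y in zip(Xs, ys)]

rng = np.random.default_rng(1)
w_true = rng.standard_normal(5)
Xs = [rng.standard_normal((40, 5)) for _ in range(3)]            # three tasks
ys = [X @ (w_true + 0.1 * rng.standard_normal(5)) for X in Xs]   # related tasks
W = shared_ridge(Xs, ys, lam=0.5)                                # per-task weights
```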
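For the MoNIG entry above: the paper defines its own summation operation for
fusing Normal-Inverse-Gamma (NIG) heads, which is not reproduced here. The
sketch below shows only the standard uncertainty decomposition of a single
NIG(gamma, nu, alpha, beta) head and a deliberately naive inverse-uncertainty
weighting across two toy modalities.

```python
# Standard NIG predictive moments plus a naive two-modality fusion (assumed,
# NOT the MoNIG summation from the cited paper).
import numpy as np

def nig_moments(gamma, nu, alpha, beta):
    """Mean, aleatoric variance E[sigma^2], epistemic variance Var[mu]
    of NIG(gamma, nu, alpha, beta), valid for alpha > 1."""
    return gamma, beta / (alpha - 1.0), beta / (nu * (alpha - 1.0))

params = [(1.2, 2.0, 3.0, 1.5), (0.8, 5.0, 4.0, 1.0)]  # toy per-modality heads
moments = [nig_moments(*p) for p in params]
weights = np.array([1.0 / (alea + epis) for _, alea, epis in moments])
weights /= weights.sum()                               # trust the more certain
fused_mean = sum(w * mu for w, (mu, _, _) in zip(weights, moments))
```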
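For the modulating-GPs entry above: a heteroscedastic GP is the simplest
instance of latent variables modulating the outputs; a second latent function
g sets an input-dependent noise scale. This is toy prior sampling only, with
an assumed kernel and lengthscale.

```python
# Heteroscedastic modulation: y(x) = f(x) + exp(g(x)/2) * eps, with f, g ~ GP.
import numpy as np

def rbf(x1, x2, lengthscale=0.2):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / lengthscale**2)

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
L = np.linalg.cholesky(rbf(x, x) + 1e-6 * np.eye(len(x)))
f = L @ rng.standard_normal(len(x))        # latent mean function
g = L @ rng.standard_normal(len(x))        # latent log-variance function
y = f + np.exp(0.5 * g) * rng.standard_normal(len(x))  # input-dependent noise
```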
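For the THP entry above: the sketch shows the general shape of a
self-attention-parameterized conditional intensity, a softplus of an affine
function of the latest event's hidden state plus a continuous inter-event
term. The hidden states here are random stand-ins for transformer outputs,
and the exact THP parameterization is in the cited paper.

```python
# Schematic continuous-time intensity lambda(t | history) for t after the
# first event; `hidden` stands in for self-attention encodings of the events.
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def intensity(t, event_times, hidden, w, alpha, b):
    """softplus(alpha * (t - t_j) / t_j + w . h_j + b), with j the latest
    event at or before time t."""
    j = np.searchsorted(event_times, t, side="right") - 1
    return softplus(alpha * (t - event_times[j]) / event_times[j]
                    + w @ hidden[j] + b)

rng = np.random.default_rng(3)
times = np.cumsum(rng.exponential(1.0, size=10))   # toy event history
H = rng.standard_normal((10, 4))                   # stand-in hidden states
lam = intensity(times[-1] + 0.5, times, H, rng.standard_normal(4), 0.1, 0.0)
```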
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.