Learning Multi-Task Gaussian Process Over Heterogeneous Input Domains
- URL: http://arxiv.org/abs/2202.12636v1
- Date: Fri, 25 Feb 2022 11:55:09 GMT
- Title: Learning Multi-Task Gaussian Process Over Heterogeneous Input Domains
- Authors: Haitao Liu, Kai Wu, Yew-Soon Ong, Xiaomo Jiang, Xiaofang Wang
- Abstract summary: Multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian model for learning correlated tasks.
This paper presents a novel heterogeneous stochastic variational linear model of coregionalization (HSVLMC) for simultaneously learning tasks with varied input domains.
- Score: 27.197576157695096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian
model for learning correlated tasks effectively by transferring knowledge
across tasks. However, current MTGP models are usually limited to multi-task
scenarios defined in the same input domain, and thus cannot tackle the
practical heterogeneous case, i.e., where the features of the input domains
vary over tasks. To this end, this paper presents a novel heterogeneous
stochastic variational linear model of coregionalization (HSVLMC) for
simultaneously learning tasks with varied input domains. In particular, we
develop a stochastic
stochastic variational framework with a Bayesian calibration method that (i)
takes into account the effect of dimensionality reduction raised by domain
mapping in order to achieve effective input alignment; and (ii) employs a
residual modeling strategy to leverage the inductive bias brought by prior
domain mappings for better model inference. Finally, the superiority of the
proposed model against existing LMC models has been extensively verified on
diverse heterogeneous multi-task cases.
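To make the idea concrete, here is a minimal toy sketch of the two ingredients the abstract describes: heterogeneous inputs aligned into a shared space by domain mappings, and tasks coupled through a linear model of coregionalization (LMC). All names, the fixed random projections, and the kernel choice are hypothetical simplifications, not the paper's actual learned mappings or inference procedure.

```python
# Illustrative sketch: align two tasks with different input dimensions onto a
# shared latent space, then couple them through an LMC prior covariance.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel on the shared latent space."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Two tasks with heterogeneous input domains: 5-D and 3-D features.
X_task0 = rng.normal(size=(20, 5))
X_task1 = rng.normal(size=(15, 3))

# Fixed linear domain mappings into a shared 2-D latent space
# (the paper learns and calibrates these; random projections stand in here).
W0 = rng.normal(size=(5, 2))
W1 = rng.normal(size=(3, 2))
Z = np.vstack([X_task0 @ W0, X_task1 @ W1])   # aligned inputs, shape (35, 2)

# LMC prior: T tasks share Q independent latent GPs via mixing matrix A.
T, Q = 2, 2
A = rng.normal(size=(T, Q))                    # coregionalization weights
task = np.array([0] * 20 + [1] * 15)           # task index per input row

K_latent = rbf_kernel(Z, Z)                    # shared-space kernel, (35, 35)
B = A @ A.T                                    # task covariance, (T, T)
K_multi = B[task][:, task] * K_latent          # full multi-task covariance

# A valid GP prior covariance must be symmetric positive semi-definite.
eigvals = np.linalg.eigvalsh(K_multi)
print(K_multi.shape, eigvals.min() >= -1e-8)
```

The elementwise product of the task covariance B and the shared-input kernel is PSD by the Schur product theorem, so the sketch yields a valid joint prior over both tasks despite their differing input dimensionalities.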
Related papers
- Regularized Multi-output Gaussian Convolution Process with Domain Adaptation [0.0]
Multi-output Gaussian process (MGP) has been attracting increasing attention as a transfer learning method to model multiple outputs.
Despite its high flexibility and generality, MGP still faces two critical challenges when applied to transfer learning.
The first one is negative transfer, which occurs when there exists no shared information among the outputs.
The second challenge is the input domain inconsistency, which is commonly studied in transfer learning yet not explored in MGP.
arXiv Detail & Related papers (2024-09-04T14:56:28Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- Multi-Response Heteroscedastic Gaussian Process Models and Their Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
An MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Transfer learning with affine model transformation [18.13383101189326]
This paper presents a general class of transfer learning regression called affine model transfer.
It is shown that the affine model transfer broadly encompasses various existing methods, including the most common procedure based on neural feature extractors.
arXiv Detail & Related papers (2022-10-18T10:50:24Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- Scalable Multi-Task Gaussian Processes with Neural Embedding of Coregionalization [9.873139480223367]
Multi-task regression attempts to exploit the task similarity in order to achieve knowledge transfer across related tasks for performance improvement.
The linear model of coregionalization (LMC) is a well-known MTGP paradigm which exploits the dependency of tasks through linear combination of several independent and diverse GPs.
We develop the neural embedding of coregionalization that transforms the latent GPs into a high-dimensional latent space to induce rich yet diverse behaviors.
arXiv Detail & Related papers (2021-09-20T01:28:14Z)
- T-SVDNet: Exploring High-Order Prototypical Correlations for Multi-Source Domain Adaptation [41.356774580308986]
We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
arXiv Detail & Related papers (2021-07-30T06:33:05Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
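Several of the related papers above build on the linear model of coregionalization, in which each task output is a linear combination of a few independent, diverse latent GPs, f_t(x) = sum_q a_{tq} g_q(x). A minimal toy sketch of that construction (all names and settings are hypothetical, not taken from any listed paper):

```python
# Sample T correlated task functions by mixing Q independent latent GPs
# with different lengthscales through a coregionalization matrix A.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 50)[:, None]

def rbf(X1, X2, ls):
    """Squared-exponential kernel between two column vectors of inputs."""
    d2 = (X1 - X2.T) ** 2
    return np.exp(-0.5 * d2 / ls**2)

Q, T = 2, 3
lengthscales = [0.1, 0.4]            # diverse latent processes
A = rng.normal(size=(T, Q))          # mixing (coregionalization) matrix

# Draw each independent latent GP, then mix them into T task functions.
latents = np.stack([
    rng.multivariate_normal(np.zeros(50), rbf(X, X, ls) + 1e-8 * np.eye(50))
    for ls in lengthscales
])                                   # shape (Q, 50)
tasks = A @ latents                  # shape (T, 50): correlated task draws

# Under this prior, the between-task covariance is B = A A^T.
B = A @ A.T
print(tasks.shape, B.shape)
```

Because every task is a linear combination of the same latent draws, observing one task carries information about the others through B; this is the knowledge-transfer mechanism that the LMC-based papers above extend in different directions.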
This list is automatically generated from the titles and abstracts of the papers in this site.