Regularized Multi-output Gaussian Convolution Process with Domain Adaptation
- URL: http://arxiv.org/abs/2409.02778v1
- Date: Wed, 4 Sep 2024 14:56:28 GMT
- Title: Regularized Multi-output Gaussian Convolution Process with Domain Adaptation
- Authors: Wang Xinming, Wang Chao, Song Xuan, Kirby Levi, Wu Jianguo
- Abstract summary: Multi-output Gaussian process (MGP) has been attracting increasing attention as a transfer learning method to model multiple outputs.
Despite its high flexibility and generality, MGP still faces two critical challenges when applied to transfer learning.
The first one is negative transfer, which occurs when there exists no shared information among the outputs.
The second challenge is the input domain inconsistency, which is commonly studied in transfer learning yet not explored in MGP.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The multi-output Gaussian process (MGP) has been attracting increasing attention as a transfer learning method for modeling multiple outputs. Despite its high flexibility and generality, MGP still faces two critical challenges when applied to transfer learning. The first is negative transfer, which occurs when there is no shared information among the outputs. The second is input domain inconsistency, which is commonly studied in transfer learning yet unexplored in MGP. In this paper, we propose a regularized MGP modeling framework with domain adaptation to overcome these challenges. More specifically, a sparse covariance matrix of MGP is constructed using a convolution process, where penalization terms are added to adaptively select the most informative outputs for knowledge transfer. To deal with the domain inconsistency, a domain adaptation method is proposed that marginalizes inconsistent features and expands missing features to align the input domains across outputs. Statistical properties of the proposed method are provided to guarantee its performance both practically and asymptotically. The proposed framework outperforms state-of-the-art benchmarks in comprehensive simulation studies and in a real case study of a ceramic manufacturing process. The results demonstrate the effectiveness of our method in handling both negative transfer and domain inconsistency.
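The two mechanisms in the abstract, a convolution-process cross-covariance and a penalty that adaptively shuts off uninformative outputs, can be illustrated with a toy two-output sketch. This is a minimal sketch under assumed Gaussian smoothing kernels; the function names, the single transfer weight `b`, and the L1 penalty form are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def cross_cov(x1, x2, l1, l2, a1, a2):
    # Closed-form cross-covariance when outputs i and j convolve a shared
    # white-noise process with Gaussian smoothing kernels of widths l_i, l_j:
    # k_ij(d) = a_i a_j * sqrt(2*pi) * l_i l_j / sqrt(l_i^2 + l_j^2)
    #           * exp(-d^2 / (2 (l_i^2 + l_j^2)))
    d = x1[:, None] - x2[None, :]
    scale = a1 * a2 * np.sqrt(2.0 * np.pi) * l1 * l2 / np.sqrt(l1**2 + l2**2)
    return scale * np.exp(-d**2 / (2.0 * (l1**2 + l2**2)))

def mgp_cov(x_t, x_s, lt, ls, a_t, a_s, b, jitter=1e-2):
    # Block covariance over [target; source]; b in [0, 1] controls how much
    # latent process the source output shares with the target output.
    Ktt = cross_cov(x_t, x_t, lt, lt, a_t, a_t)
    Kss = cross_cov(x_s, x_s, ls, ls, a_s, a_s)
    Kts = b * cross_cov(x_t, x_s, lt, ls, a_t, a_s)
    K = np.block([[Ktt, Kts], [Kts.T, Kss]])
    return K + jitter * np.eye(K.shape[0])

def penalized_nll(y, K, b, lam):
    # Gaussian negative log-likelihood plus an L1 penalty on the transfer
    # weight; a large lam drives b toward zero, i.e. no knowledge transfer.
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y)) + lam * abs(b)

rng = np.random.default_rng(0)
x_t = np.linspace(0.0, 1.0, 15)   # target-output inputs
x_s = np.linspace(0.0, 1.0, 15)   # source-output inputs
y = rng.standard_normal(30)       # stacked [target; source] observations
K = mgp_cov(x_t, x_s, lt=0.3, ls=0.3, a_t=1.0, a_s=1.0, b=0.8)
nll = penalized_nll(y, K, b=0.8, lam=0.5)
```

In the paper's full framework the same idea extends to many source outputs, with one penalized transfer weight per source, and input-domain inconsistency is handled separately by marginalizing inconsistent features and expanding missing ones before the covariance is built.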
Related papers
- Non-stationary and Sparsely-correlated Multi-output Gaussian Process with Spike-and-Slab Prior [0.0]
Multi-output Gaussian process (MGP) is commonly used as a transfer learning method.
This study proposes a non-stationary MGP model that can capture both the dynamic and sparse correlation among outputs.
arXiv Detail & Related papers (2024-09-05T00:56:25Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Learning Multi-Task Gaussian Process Over Heterogeneous Input Domains [27.197576157695096]
Multi-task Gaussian process (MTGP) is a well-known non-parametric Bayesian model for learning correlated tasks.
This paper presents a novel heterogeneous variational linear model of coregionalization (HSVLMC) for simultaneously learning the tasks with varied input domains.
arXiv Detail & Related papers (2022-02-25T11:55:09Z)
- Maximum Batch Frobenius Norm for Multi-Domain Text Classification [19.393393465837377]
We propose a maximum batch Frobenius norm (MBF) method to boost the feature discriminability for multi-domain text classification.
Experiments on two MDTC benchmarks show that our MBF approach can effectively advance the performance of the state-of-the-art.
arXiv Detail & Related papers (2022-01-29T14:37:56Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
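Read as a linear analogue, the two stages resemble two-stage least squares. The toy sketch below is an illustrative assumption, not the IV-DG method itself; it shows how regressing on features predicted from another domain strips out an unobserved confounder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
u = rng.standard_normal(n)                    # unobserved confounder
x_b = rng.standard_normal(n)                  # domain-B features (instrument-like)
x_a = x_b + u + 0.3 * rng.standard_normal(n)  # domain-A features, confounded by u
y = 3.0 * x_a + u + 0.1 * rng.standard_normal(n)

# Stage 1: learn the conditional mean E[x_a | x_b] by least squares.
X_b = np.column_stack([np.ones(n), x_b])
coef1, *_ = np.linalg.lstsq(X_b, x_a, rcond=None)
x_a_hat = X_b @ coef1                         # part of x_a explained by x_b only

# Stage 2: predict labels from the fitted features; because x_a_hat depends
# only on x_b (independent of u), the estimated effect is not confounded.
X_hat = np.column_stack([np.ones(n), x_a_hat])
coef2, *_ = np.linalg.lstsq(X_hat, y, rcond=None)

# A naive regression of y on x_a absorbs the confounder and overestimates
# the true coefficient of 3.0.
coef_naive, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x_a]), y, rcond=None)
```

This is the classic two-stage least squares pattern; IV-DG applies the same logic with learned conditional distributions between domains rather than simple linear regressions.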
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose a Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A^2KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Continuous Domain Adaptation with Variational Domain-Agnostic Feature Replay [78.7472257594881]
Learning in non-stationary environments is one of the biggest challenges in machine learning.
Non-stationarity can be caused by either task drift or domain drift.
We propose variational domain-agnostic feature replay, an approach that is composed of three components.
arXiv Detail & Related papers (2020-03-09T19:50:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.