R^2-HGP: A Double-Regularized Gaussian Process for Heterogeneous Transfer Learning
- URL: http://arxiv.org/abs/2512.10258v1
- Date: Thu, 11 Dec 2025 03:38:20 GMT
- Title: R^2-HGP: A Double-Regularized Gaussian Process for Heterogeneous Transfer Learning
- Authors: Duo Wang, Xinming Wang, Chao Wang, Xiaowei Yue, Jianguo Wu
- Abstract summary: Multi-output Gaussian process (MGP) models have attracted significant attention for their flexibility and uncertainty-quantification capabilities. They have been widely adopted in multi-source transfer learning scenarios due to their ability to capture inter-task correlations. This paper proposes a Double-Regularized Heterogeneous Gaussian Process framework (R^2-HGP) to overcome several challenges in transfer learning.
- Score: 15.278249213859844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-output Gaussian process (MGP) models have attracted significant attention for their flexibility and uncertainty-quantification capabilities, and have been widely adopted in multi-source transfer learning scenarios due to their ability to capture inter-task correlations. However, they still face several challenges in transfer learning. First, the input spaces of the source and target domains are often heterogeneous, which makes direct knowledge transfer difficult. Second, potential prior knowledge and physical information are typically ignored during heterogeneous transfer, hampering the utilization of domain-specific insights and leading to unstable mappings. Third, inappropriate information sharing between the target and sources can easily lead to negative transfer. Traditional models fail to address these issues in a unified way. To overcome these limitations, this paper proposes a Double-Regularized Heterogeneous Gaussian Process framework (R^2-HGP). Specifically, a trainable prior probability mapping model is first proposed to align the heterogeneous input domains. The resulting aligned inputs are treated as latent variables, upon which a multi-source transfer GP model is constructed, and the entire structure is integrated into a novel conditional variational autoencoder (CVAE) based framework. Physical insight is further incorporated as a regularization term to ensure that the alignment results adhere to known physical knowledge. Next, within the multi-source transfer GP model, a sparsity penalty is imposed on the transfer coefficients, enabling the model to adaptively select the most informative source outputs and suppress negative transfer. Extensive simulations and real-world engineering case studies validate the effectiveness of our R^2-HGP, demonstrating consistent superiority over state-of-the-art benchmarks across diverse evaluation metrics.
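The sparsity penalty on the transfer coefficients can be illustrated with a minimal sketch. Assume a lasso-style L1 penalty on a linear combination of pre-computed source predictions, solved by proximal gradient descent; the paper's actual model embeds such a penalty inside a multi-source GP within a CVAE, so `fit_transfer_weights` and the toy data below are hypothetical illustrations, not the authors' code:

```python
import numpy as np

def fit_transfer_weights(source_preds, y_target, lam=0.1, lr=0.01, steps=2000):
    """Fit sparse transfer coefficients w so that sum_k w_k * f_k(x)
    approximates the target output. The L1 penalty drives the weights of
    uninformative sources toward exactly zero, suppressing negative
    transfer. Uses proximal gradient descent (ISTA) on the squared loss."""
    F = np.asarray(source_preds)           # shape (n_samples, n_sources)
    y = np.asarray(y_target)
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        grad = F.T @ (F @ w - y) / len(y)  # gradient of 0.5 * MSE
        w = w - lr * grad
        # soft-thresholding: the proximal operator of the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Toy check: source 0 matches the target, source 1 is pure noise.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
f0 = np.sin(x).ravel()                     # informative source prediction
f1 = rng.normal(size=200)                  # unrelated source prediction
y = np.sin(x).ravel() + 0.05 * rng.normal(size=200)
w = fit_transfer_weights(np.column_stack([f0, f1]), y, lam=0.05)
```

With this setup the informative source receives a large weight while the noise source is shrunk to (nearly) zero, which is the adaptive-source-selection behavior the abstract describes.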
Related papers
- Minimax optimal adaptive structured transfer learning through semi-parametric domain-varying coefficient model [9.091986429838117]
We study a multi-source, single-target transfer learning problem under conditional distributional drift. We develop an adaptive transfer learning estimator that selectively borrows strength from informative source domains.
arXiv Detail & Related papers (2026-02-20T03:53:06Z) - Transfer Learning Through Conditional Quantile Matching [3.86972243789112]
We introduce a transfer learning framework for regression that leverages heterogeneous source domains to improve predictive performance in a data-scarce target domain. Our approach learns a conditional generative model separately for each source domain and calibrates the generated responses to the target domain via conditional quantile matching.
arXiv Detail & Related papers (2026-02-02T17:19:55Z) - Wasserstein Transfer Learning [6.602088845993411]
We introduce a novel transfer learning framework for regression models whose outputs are probability distributions residing in the Wasserstein space. We propose an estimator with provable convergence rates, quantifying the impact of domain similarity on transfer efficiency. For cases where the informative subset is unknown, we develop a data-driven transfer learning procedure designed to mitigate negative transfer.
arXiv Detail & Related papers (2025-05-23T02:38:03Z) - Progressive Multi-Source Domain Adaptation for Personalized Facial Expression Recognition [64.37805399216347]
Personalized facial expression recognition (FER) involves adapting a machine learning model using samples from labeled sources and unlabeled target domains. We propose a progressive MSDA approach that gradually introduces information from subjects based on their similarity to the target subject.
arXiv Detail & Related papers (2025-04-05T19:14:51Z) - Transfer Learning through Enhanced Sufficient Representation: Enriching Source Domain Knowledge with Target Data [2.308168896770315]
We introduce a novel method for transfer learning called Transfer learning through Enhanced Sufficient Representation (TESR). Our approach begins by estimating a sufficient and invariant representation from the source domains. This representation is then enhanced with an independent component derived from the target data, ensuring that it is sufficient for the target domain and adaptable to its specific characteristics.
arXiv Detail & Related papers (2025-02-22T13:18:28Z) - Dual-stream Feature Augmentation for Domain Generalization [16.495752769624872]
We propose a Dual-stream Feature Augmentation (DFA) method by constructing hard features from two perspectives.
Our approach could achieve state-of-the-art performance for domain generalization.
arXiv Detail & Related papers (2024-09-07T03:41:05Z) - Regularized Multi-output Gaussian Convolution Process with Domain Adaptation [0.0]
Multi-output Gaussian process (MGP) has been attracting increasing attention as a transfer learning method to model multiple outputs.
Despite its high flexibility and generality, MGP still faces two critical challenges when applied to transfer learning.
The first one is negative transfer, which occurs when there exists no shared information among the outputs.
The second challenge is the input domain inconsistency, which is commonly studied in transfer learning yet not explored in MGP.
arXiv Detail & Related papers (2024-09-04T14:56:28Z) - Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z) - Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
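The two-stage idea above can be caricatured with a linear, two-stage-least-squares-style toy. IV-DG itself learns full conditional distributions across domains, so the variables, coefficients, and setup below are invented purely for illustration:

```python
import numpy as np

# Schematic two-stage (2SLS-style) estimate, loosely analogous to the
# two-stage procedure summarized above; all names are illustrative.
rng = np.random.default_rng(2)
n = 5000
u = rng.normal(size=n)                               # unobserved confounder
z = rng.normal(size=n)                               # domain-A features (instrument)
x = 0.8 * z + 0.6 * u + 0.1 * rng.normal(size=n)     # domain-B features
y = 2.0 * x + 1.5 * u + 0.1 * rng.normal(size=n)     # labels, confounded by u

# Naive regression of y on x is biased by the confounder u.
beta_naive = (x @ y) / (x @ x)

# Stage 1: model x given z (here, a linear conditional mean).
gamma = (z @ x) / (z @ z)
x_hat = gamma * z

# Stage 2: regress y on the fitted x_hat; the confounded part of x is
# projected out, so the estimate recovers the true coefficient (2.0).
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
```

The naive estimate absorbs the confounder's contribution, while the two-stage estimate, which uses only the variation in x explained by z, does not; this is the debiasing effect the two-stage learning scheme aims for.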
arXiv Detail & Related papers (2021-10-04T13:32:57Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose the Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by more than 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.