Minimax optimal adaptive structured transfer learning through semi-parametric domain-varying coefficient model
- URL: http://arxiv.org/abs/2602.17967v1
- Date: Fri, 20 Feb 2026 03:53:06 GMT
- Title: Minimax optimal adaptive structured transfer learning through semi-parametric domain-varying coefficient model
- Authors: Hanxiao Chen, Debarghya Mukherjee
- Abstract summary: We study a multi-source, single-target transfer learning problem under conditional distributional drift. We develop an adaptive transfer learning estimator that selectively borrows strength from informative source domains.
- Score: 9.091986429838117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning aims to improve inference in a target domain by leveraging information from related source domains, but its effectiveness critically depends on how cross-domain heterogeneity is modeled and controlled. When the conditional mechanism linking covariates and responses varies across domains, indiscriminate information pooling can lead to negative transfer, degrading performance relative to target-only estimation. We study a multi-source, single-target transfer learning problem under conditional distributional drift and propose a semiparametric domain-varying coefficient model (DVCM), in which domain-relatedness is encoded through an observable domain identifier. This framework generalizes classical varying-coefficient models to structured transfer learning and interpolates between invariant and fully heterogeneous regimes. Building on this model, we develop an adaptive transfer learning estimator that selectively borrows strength from informative source domains while provably safeguarding against negative transfer. Our estimator is computationally efficient and easy to implement; we also show that it is minimax rate-optimal and derive its asymptotic distribution, enabling valid uncertainty quantification and hypothesis testing despite data-adaptive pooling and shrinkage. Our results precisely characterize the interplay among domain heterogeneity, the smoothness of the underlying mean function, and the number of source domains and are corroborated by comprehensive numerical experiments and two real-data applications.
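The abstract's core idea, shrinking a target-only fit toward a pooled source fit to borrow strength while limiting negative transfer, can be illustrated with a minimal sketch. This is not the paper's DVCM estimator: the function names, the fixed shrinkage weight `lam`, and the simulation below are all illustrative assumptions (an adaptive method would choose `lam` from the data).

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    """Least-squares coefficients for one domain."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def shrink_to_target(beta_target, beta_sources, lam):
    """Convex combination of the target-only fit and the pooled source fit.

    lam = 0 recovers target-only estimation (no transfer); lam = 1 fully
    pools the sources.  An adaptive estimator would pick lam by weighing
    the source-target discrepancy against the target-only estimation error.
    """
    beta_pool = np.mean(beta_sources, axis=0)
    return (1 - lam) * beta_target + lam * beta_pool

# Toy setup: a small target sample and K larger sources whose linear
# coefficients drift mildly across domains (conditional drift).
d, n_target, n_source, K = 3, 20, 200, 5
beta_true = np.array([1.0, -2.0, 0.5])
X_t = rng.normal(size=(n_target, d))
y_t = X_t @ beta_true + rng.normal(size=n_target)

betas_s = []
for k in range(K):
    drift = 0.05 * rng.normal(size=d)          # mild cross-domain drift
    X_s = rng.normal(size=(n_source, d))
    y_s = X_s @ (beta_true + drift) + rng.normal(size=n_source)
    betas_s.append(fit_ols(X_s, y_s))

beta_t = fit_ols(X_t, y_t)
beta_shrunk = shrink_to_target(beta_t, betas_s, lam=0.8)
print(beta_t, beta_shrunk)
```

When the drift is small relative to the target-only estimation noise, the shrunk estimate is typically closer to the truth; when the drift is large, a data-driven `lam` near zero guards against negative transfer.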
Related papers
- Transfer Learning Through Conditional Quantile Matching [3.86972243789112]
We introduce a transfer learning framework for regression that leverages heterogeneous source domains to improve predictive performance in a data-scarce target domain. Our approach learns a conditional generative model separately for each source domain and calibrates the generated responses to the target domain via conditional quantile matching.
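The quantile-matching calibration step can be sketched in a simplified form. The cited paper matches quantiles *conditionally* on covariates; the sketch below is the marginal (unconditional) special case, with covariates ignored so the mechanism stays visible, and all names are illustrative.

```python
import numpy as np

def quantile_match(y_source, y_target):
    """Map each source response through the empirical quantile transform
    so its marginal distribution matches the target's.

    Each source value is replaced by the target quantile at its own
    empirical-CDF level; a marginal simplification of conditional
    quantile matching.
    """
    ranks = np.argsort(np.argsort(y_source))   # 0 .. n-1
    u = (ranks + 0.5) / len(y_source)          # empirical CDF levels
    return np.quantile(y_target, u)            # target quantile function

rng = np.random.default_rng(1)
y_s = rng.normal(loc=5.0, scale=2.0, size=1000)  # shifted, scaled source
y_t = rng.normal(loc=0.0, scale=1.0, size=300)   # small target sample
y_cal = quantile_match(y_s, y_t)
print(y_cal.mean(), y_cal.std())
```

After calibration the source responses inherit the target's location and scale, which is what makes the pooled sample usable for target-domain estimation.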
arXiv Detail & Related papers (2026-02-02T17:19:55Z) - Tessellation Localized Transfer learning for nonparametric regression [0.764671395172401]
Transfer learning aims to improve performance on a target task by leveraging information from related source tasks. We propose a nonparametric regression transfer learning framework that explicitly models heterogeneity in the source-target relationship.
arXiv Detail & Related papers (2026-01-02T20:58:05Z) - A Turn Toward Better Alignment: Few-Shot Generative Adaptation with Equivariant Feature Rotation [67.2019317630466]
Few-shot image generation aims to effectively adapt a source generative model to a target domain using very few training images. We propose Equivariant Feature Rotation (EFR), a novel adaptation strategy that aligns source and target domains at two complementary levels. Our method significantly enhances the generative performance within the targeted domain.
arXiv Detail & Related papers (2025-12-24T13:48:22Z) - Heterogeneous Multisource Transfer Learning via Model Averaging for Positive-Unlabeled Data [2.030810815519794]
We propose a novel transfer learning framework that integrates information from heterogeneous data sources without direct data sharing. For each source domain type, a tailored logistic regression model is conducted, and knowledge is transferred to the PU target domain through model averaging. Our method outperforms other comparative methods in terms of predictive accuracy and robustness, especially under limited labeled data and heterogeneous environments.
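The model-averaging step that this abstract describes reduces, in its simplest form, to a weighted combination of per-source predicted probabilities. The sketch below fixes the weights for illustration; in the cited paper they are data-driven (e.g. chosen to minimize target-domain risk), and all names here are illustrative.

```python
import numpy as np

def model_average(preds, weights):
    """Weighted average of per-source predicted probabilities.

    `preds` is a (num_models, num_samples) array of probabilities from
    the source-domain models; `weights` are nonnegative and normalized
    to sum to one before averaging.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(preds).T @ w

# Two source-domain models scoring the same three target points.
preds = [[0.2, 0.8, 0.5],
         [0.4, 0.6, 0.9]]
avg = model_average(preds, weights=[3.0, 1.0])  # trust model 1 more
print(avg)  # averaged probabilities: 0.25, 0.75, 0.6
```

No raw data crosses domains in this step, only each source's fitted predictions, which is what makes the "without direct data sharing" claim possible.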
arXiv Detail & Related papers (2025-11-14T03:15:31Z) - Coefficient Shape Transfer Learning for Functional Linear Regression [0.0]
This article develops a novel transfer learning methodology to tackle the challenge of data scarcity in functional linear models. The methodology incorporates samples from the target model (target domain) alongside those from auxiliary models (source domains). It ensures reliable knowledge transfer even when data from different sources differ in magnitude.
arXiv Detail & Related papers (2025-06-13T00:00:43Z) - Partial Transportability for Domain Generalization [56.37032680901525]
Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution. Our contribution is to provide the first general estimation technique for transportability problems. We propose a gradient-based optimization scheme for making scalable inferences in practice.
arXiv Detail & Related papers (2025-03-30T22:06:37Z) - Model-Robust and Adaptive-Optimal Transfer Learning for Tackling Concept Shifts in Nonparametric Regression [7.243632426715939]
We present a transfer learning procedure that is robust against model misspecification while adaptively attaining optimality. We derive the adaptive convergence rates of the excess risk for specifying Gaussian kernels in a prevalent class of hypothesis transfer learning algorithms.
arXiv Detail & Related papers (2025-01-18T20:33:37Z) - Variational Model Perturbation for Source-Free Domain Adaptation [64.98560348412518]
We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework.
We demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains.
arXiv Detail & Related papers (2022-10-19T08:41:19Z) - Transfer learning with affine model transformation [18.13383101189326]
This paper presents a general class of transfer learning regression called affine model transfer.
It is shown that the affine model transfer broadly encompasses various existing methods, including the most common procedure based on neural feature extractors.
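The affine model transfer idea, predicting on the target as an affine transformation of a frozen source model's output, can be sketched in its simplest member: constant intercept and scale fit by least squares on the target sample. The general class allows both to be functions of the covariates; the names and toy data below are illustrative assumptions.

```python
import numpy as np

def fit_affine_transfer(f_source, X, y):
    """Least-squares fit of a, b in y ≈ a + b * f_source(x).

    f_source is a frozen, pretrained source-domain predictor; only the
    affine correction (a, b) is estimated on the small target sample.
    """
    s = f_source(X)
    A = np.column_stack([np.ones_like(s), s])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

rng = np.random.default_rng(2)
f_source = lambda X: 2.0 * X[:, 0]                 # stand-in source model
X = rng.normal(size=(50, 1))
y = 1.0 + 3.0 * f_source(X) + 0.1 * rng.normal(size=50)  # target relation
a, b = fit_affine_transfer(f_source, X, y)
print(a, b)
```

Setting `b = 1, a = 0` recovers the source model unchanged, while `b = 0` discards it entirely, so the affine family interpolates between full reuse and target-only fitting.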
arXiv Detail & Related papers (2022-10-18T10:50:24Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)