Multi-source domain adaptation for regression
- URL: http://arxiv.org/abs/2312.05460v1
- Date: Sat, 9 Dec 2023 04:09:37 GMT
- Title: Multi-source domain adaptation for regression
- Authors: Yujie Wu, Giovanni Parmigiani and Boyu Ren
- Abstract summary: Multi-source domain adaptation (DA) aims at leveraging information from more than one source domain to make predictions in a target domain.
We extend a flexible single-source DA algorithm for classification through outcome-coarsening to enable its application to regression problems.
We then augment our single-source DA algorithm for regression with ensemble learning to achieve multi-source DA.
- Score: 2.8648412780872845
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-source domain adaptation (DA) aims at leveraging information from more
than one source domain to make predictions in a target domain, where different
domains may have different data distributions. Most existing methods for
multi-source DA focus on classification problems, while there is only limited
investigation in regression settings. In this paper, we fill this gap
through a two-step procedure. First, we extend a flexible single-source DA
algorithm for classification through outcome-coarsening to enable its
application to regression problems. We then augment our single-source DA
algorithm for regression with ensemble learning to achieve multi-source DA. We
consider three learning paradigms in the ensemble algorithm, which linearly
combines the target-adapted learners trained with each source domain: (i) a
multi-source stacking algorithm to obtain the ensemble weights; (ii) a
similarity-based weighting where the weights reflect the quality of DA of each
target-adapted learner; and (iii) a combination of the stacking and similarity
weights. We illustrate the performance of our algorithms with simulations and a
data application where the goal is to predict high-density lipoprotein (HDL)
cholesterol levels using gut microbiome data. We observe a consistent improvement in
prediction performance of our multi-source DA algorithm over the routinely used
methods in all these scenarios.
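The abstract describes the procedure only at a high level; the sketch below illustrates one possible reading of the two steps in Python. It coarsens the continuous outcome into ordered classes so that a classification DA learner can be applied, maps the coarsened predictions back to continuous values, and then combines the per-source target-adapted learners with the three weighting paradigms (stacking, similarity-based, and their combination). All function names, the quantile-based coarsening, the inverse-error similarity weights, and the placeholder classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

K_BINS = 5  # hypothetical number of coarsening levels for the continuous outcome


def coarsen_outcome(y, k=K_BINS):
    """Discretize a continuous outcome into k ordered classes via quantile bins."""
    edges = np.quantile(y, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(y, edges), edges


def class_to_value(y, labels, k=K_BINS):
    """Map each coarsened class back to a representative continuous value (bin mean)."""
    return np.array([y[labels == c].mean() if np.any(labels == c) else y.mean()
                     for c in range(k)])


def target_adapted_learner(X_src, y_src, clf):
    """Step 1: train a single-source learner on coarsened outcomes and return a
    regression-style predictor. `clf` is a placeholder for the classification DA
    algorithm; a real DA method would also use the unlabeled target features."""
    labels, _ = coarsen_outcome(y_src)
    values = class_to_value(y_src, labels)
    clf.fit(X_src, labels)
    return lambda X: values[clf.predict(X)]  # coarsened class -> continuous prediction


def stacking_weights(preds, y_held_out):
    """Paradigm (i): stacking weights from non-negative least squares on held-out
    target data, normalized to sum to one."""
    w, _ = nnls(np.column_stack(preds), y_held_out)
    return w / w.sum() if w.sum() > 0 else np.full(len(preds), 1.0 / len(preds))


def similarity_weights(adaptation_errors):
    """Paradigm (ii): weights reflecting the quality of adaptation of each learner;
    inverse error is used here purely as an illustrative similarity proxy."""
    inv = 1.0 / (np.asarray(adaptation_errors) + 1e-8)
    return inv / inv.sum()


def combined_weights(w_stack, w_sim, alpha=0.5):
    """Paradigm (iii): a convex combination of the stacking and similarity weights."""
    return alpha * w_stack + (1 - alpha) * w_sim


def ensemble_predict(learners, weights, X):
    """Step 2: linear combination of the per-source target-adapted learners."""
    return sum(w * f(X) for w, f in zip(weights, learners))
```

Usage would follow the two steps directly: fit one target_adapted_learner per source domain, evaluate each on whatever labeled target data are available to obtain preds and adaptation_errors, and pass the chosen weights to ensemble_predict.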
Related papers
- Two-Timescale Model Caching and Resource Allocation for Edge-Enabled AI-Generated Content Services [55.0337199834612]
Generative AI (GenAI) has emerged as a transformative technology, enabling customized and personalized AI-generated content (AIGC) services.
These services require executing GenAI models with billions of parameters, posing significant obstacles to the resource-limited wireless edge.
We introduce the formulation of joint model caching and resource allocation for AIGC services to balance a trade-off between AIGC quality and latency metrics.
arXiv Detail & Related papers (2024-11-03T07:01:13Z) - A Weight-aware-based Multi-source Unsupervised Domain Adaptation Method for Human Motion Intention Recognition [11.78805948637625]
Unsupervised domain adaptation (UDA) has become an effective way to address this problem.
The labeled data are collected from multiple source subjects that might be different not only from the target subject but also from each other.
This paper develops a novel theory and algorithm for UDA to recognize HMI, where the margin disparity discrepancy (MDD) is extended to multi-source UDA theory.
The developed multi-source UDA theory provides guarantees on the generalization error for the target subject.
arXiv Detail & Related papers (2024-04-19T03:49:54Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - Algorithm-Dependent Bounds for Representation Learning of Multi-Source
Domain Adaptation [7.6249291891777915]
We use information-theoretic tools to derive a novel analysis of Multi-source Domain Adaptation (MDA) from the representation learning perspective.
We propose a novel deep MDA algorithm, implicitly addressing the target shift through joint alignment.
The proposed algorithm achieves performance comparable to the state-of-the-art on a target-shifted MDA benchmark with improved memory efficiency.
arXiv Detail & Related papers (2023-04-04T18:32:20Z) - Domain Adaptation Principal Component Analysis: base linear method for
learning with out-of-distribution data [55.41644538483948]
Domain adaptation is a popular paradigm in modern machine learning.
We present a method called Domain Adaptation Principal Component Analysis (DAPCA).
DAPCA finds a linear reduced data representation useful for solving the domain adaptation task.
arXiv Detail & Related papers (2022-08-28T21:10:56Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Multi-Source domain adaptation via supervised contrastive learning and
confident consistency regularization [0.0]
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains that generalizes to an unlabeled target domain.
We propose Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA.
arXiv Detail & Related papers (2021-06-30T14:39:15Z) - Multi-resource allocation for federated settings: A non-homogeneous
Markov chain model [2.552459629685159]
In a federated setting, agents coordinate with a central agent or a server to solve an optimization problem in which agents do not share their information with each other.
We describe how the basic additive-increase multiplicative-decrease (AIMD) algorithm can be modified in a straightforward manner to solve a class of optimization problems for federated settings for a single shared resource with no inter-agent communication.
We extend the single-resource algorithm to multiple heterogeneous shared resources that emerge in smart cities, sharing economy, and many other applications.
arXiv Detail & Related papers (2021-04-26T19:10:00Z) - Resource Allocation via Model-Free Deep Learning in Free Space Optical
Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z) - Online Meta-Learning for Multi-Source and Semi-Supervised Domain
Adaptation [4.1799778475823315]
We propose a framework to enhance performance by meta-learning the initial conditions of existing DA algorithms.
We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA).
We achieve state-of-the-art results on several DA benchmarks, including the largest-scale DomainNet.
arXiv Detail & Related papers (2020-04-09T07:48:22Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)