UFDA: Universal Federated Domain Adaptation with Practical Assumptions
- URL: http://arxiv.org/abs/2311.15570v2
- Date: Tue, 19 Dec 2023 07:12:21 GMT
- Title: UFDA: Universal Federated Domain Adaptation with Practical Assumptions
- Authors: Xinhui Liu, Zhenghao Chen, Luping Zhou, Dong Xu, Wei Xi, Gairui Bai,
Yihan Zhao, and Jizhong Zhao
- Abstract summary: This paper studies a more practical scenario named Universal Federated Domain Adaptation (UFDA)
It only requires the black-box model and the label set information of each source domain.
We propose a corresponding methodology called Hot-Learning with Contrastive Label Disambiguation (HCLD)
- Score: 33.06684706053823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional Federated Domain Adaptation (FDA) approaches usually rest
on an abundance of assumptions, which makes them far less feasible for
real-world situations and introduces security hazards. This paper relaxes the
assumptions from previous FDAs and studies a more practical scenario named
Universal Federated Domain Adaptation (UFDA). It only requires the black-box
model and the label set information of each source domain, while the label sets
of different source domains could be inconsistent, and the target-domain label
set is totally blind. Towards a more effective solution for our newly proposed
UFDA scenario, we propose a corresponding methodology called Hot-Learning with
Contrastive Label Disambiguation (HCLD). It tackles UFDA's domain-shift and
category-gap problems by using the one-hot outputs from the black-box
models of various source domains. Moreover, to better distinguish the shared
and unknown classes, we further present a cluster-level strategy named
Mutual-Voting Decision (MVD) to extract robust consensus knowledge across peer
classes from both source and target domains. Extensive experiments on three
benchmark datasets demonstrate that, despite requiring far fewer assumptions,
our method achieves performance in the UFDA scenario comparable to that of
previous methodologies relying on comprehensive additional assumptions.
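The consensus idea in the abstract (each black-box source model contributes only a one-hot prediction over its own label set, and cross-source agreement is used to separate shared classes from unknown ones) can be illustrated with a minimal sketch. This is not the paper's actual HCLD/MVD implementation; the function `vote_pseudo_labels`, its parameters, and the `min_agree` threshold are all hypothetical names introduced here for illustration.

```python
import numpy as np

def vote_pseudo_labels(one_hot_preds, label_sets, vocab, min_agree=2):
    """Illustrative consensus pseudo-labeling (assumed, not the paper's method).

    one_hot_preds: list of (N, |label_set_i|) one-hot arrays, one per source model.
    label_sets:    list of class-name lists, aligned with each source's columns.
    vocab:         union of all source class names (the shared label vocabulary).
    Returns one index into vocab per sample, or -1 for low-consensus "unknown".
    """
    n = one_hot_preds[0].shape[0]
    votes = np.zeros((n, len(vocab)), dtype=int)
    for preds, names in zip(one_hot_preds, label_sets):
        cols = preds.argmax(axis=1)                # predicted class per sample
        for i, c in enumerate(cols):
            votes[i, vocab.index(names[c])] += 1   # tally in the shared vocabulary
    labels = votes.argmax(axis=1)
    # samples without enough cross-source agreement are flagged as unknown
    labels[votes.max(axis=1) < min_agree] = -1
    return labels
```

For example, with two sources over the label sets ["cat", "dog"] and ["cat", "bird"], a target sample that both models map to "cat" receives the shared label, while a sample on which the sources disagree falls below the agreement threshold and is rejected as unknown.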
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that the source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z)
- MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z)
- A Comprehensive Survey on Source-free Domain Adaptation [69.17622123344327]
The research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years.
We provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme.
We compare the results of more than 30 representative SFDA methods on three popular classification benchmarks.
arXiv Detail & Related papers (2023-02-23T06:32:09Z)
- Source-Free Unsupervised Domain Adaptation: A Survey [32.48017861767467]
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems.
Many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which transfer knowledge from a pre-trained source model to an unlabeled target domain.
This paper provides a timely and systematic literature review of existing SFUDA approaches from a technical perspective.
arXiv Detail & Related papers (2022-12-31T18:44:45Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tackles the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
Universal Model ADaptation (UMAD) framework handles both UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Universal Multi-Source Domain Adaptation [17.045689789877926]
Unsupervised domain adaptation enables intelligent models to transfer knowledge from a labeled source domain to a similar but unlabeled target domain.
A recent study reveals that knowledge can be transferred from a source domain to an unknown target domain, a setting called Universal Domain Adaptation (UDA).
We propose a universal multi-source adaptation network (UMAN) to solve the domain adaptation problem without increasing the complexity of the model.
arXiv Detail & Related papers (2020-11-05T00:20:38Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.