Towards Unsupervised Domain Adaptation for Deep Face Recognition under
Privacy Constraints via Federated Learning
- URL: http://arxiv.org/abs/2105.07606v1
- Date: Mon, 17 May 2021 04:24:25 GMT
- Title: Towards Unsupervised Domain Adaptation for Deep Face Recognition under
Privacy Constraints via Federated Learning
- Authors: Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang,
Shuai Yi
- Abstract summary: We propose a novel unsupervised federated face recognition approach (FedFR).
FedFR improves the performance in the target domain by iteratively aggregating knowledge from the source domain through federated learning.
It protects data privacy by transferring models instead of raw data between domains.
- Score: 33.33475702665153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation has been widely adopted to generalize models
for unlabeled data in a target domain, given labeled data in a source domain,
whose data distributions differ from the target domain. However, existing works
are inapplicable to face recognition under privacy constraints because they
require sharing sensitive face images between two domains. To address this
problem, we propose a novel unsupervised federated face recognition approach
(FedFR). FedFR improves the performance in the target domain by iteratively
aggregating knowledge from the source domain through federated learning. It
protects data privacy by transferring models instead of raw data between
domains. Besides, we propose a new domain constraint loss (DCL) to regularize
source domain training. DCL suppresses the data volume dominance of the source
domain. We also enhance a hierarchical clustering algorithm to predict pseudo
labels for the unlabeled target domain accurately. To this end, FedFR forms an
end-to-end training pipeline: (1) pre-train in the source domain; (2) predict
pseudo labels by clustering in the target domain; (3) conduct
domain-constrained federated learning across two domains. Extensive experiments
and analysis on two newly constructed benchmarks demonstrate the effectiveness
of FedFR. It outperforms the baseline and classic methods in the target domain
by over 4% on the more realistic benchmark. We believe that FedFR will shed
light on applying federated learning to more computer vision tasks under
privacy constraints.
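The three-stage pipeline described in the abstract can be sketched as a toy loop. This is a minimal illustration, not the paper's implementation: the linear `local_update`, the greedy `cluster_pseudo_labels` stand-in for the enhanced hierarchical clustering, the distance threshold, and the plain FedAvg-style average are all simplifying assumptions. What it does show is the key privacy property: only model weights cross the domain boundary.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One toy local step: nudge a linear model toward its domain's data.
    Stands in for real supervised training on source (or pseudo-labeled
    target) batches; the paper's DCL regularization is omitted here."""
    grad = np.zeros_like(weights)
    for x, y in zip(data, labels):
        grad += (x @ weights - y) * x
    return weights - lr * grad / len(data)

def cluster_pseudo_labels(features, threshold=1.0):
    """Toy stand-in for the enhanced clustering step: greedily assign each
    sample to the nearest existing cluster centroid, or open a new cluster
    when every centroid is farther than `threshold`."""
    centroids, labels = [], []
    for f in features:
        if centroids:
            dists = [np.linalg.norm(f - c) for c in centroids]
            i = int(np.argmin(dists))
            if dists[i] < threshold:
                labels.append(i)
                continue
        centroids.append(f)
        labels.append(len(centroids) - 1)
    return np.array(labels)

def fedfr_pipeline(source_data, source_labels, target_data, rounds=3):
    dim = source_data.shape[1]
    # (1) pre-train in the source domain
    w = local_update(np.zeros(dim), source_data, source_labels)
    for _ in range(rounds):
        # (2) predict pseudo labels by clustering in the target domain
        pseudo = cluster_pseudo_labels(target_data).astype(float)
        # (3) federated round: each domain updates locally, the server
        # aggregates weights -- raw images never leave their domain
        w_src = local_update(w.copy(), source_data, source_labels)
        w_tgt = local_update(w.copy(), target_data, pseudo)
        w = (w_src + w_tgt) / 2.0  # FedAvg-style aggregation
    return w
```

In the actual method, the equal-weight average here is where the domain constraint loss matters: without it, the much larger source domain would dominate the aggregated model.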
Related papers
- Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation [4.7589762171821715]
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to transfer a model's performance from a labeled source domain to an unlabeled target domain without direct access to source samples.
We introduce a data-efficient, CLIP-powered dual-branch network (CDBN) to address the dual challenges of limited source data and privacy concerns.
CDBN achieves near state-of-the-art performance with far fewer source domain samples than existing methods across 31 transfer tasks on seven datasets.
arXiv Detail & Related papers (2024-10-21T09:25:49Z)
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Domain Adaptive Few-Shot Open-Set Learning [36.39622440120531]
We propose Domain Adaptive Few-Shot Open Set Recognition (DA-FSOS) and introduce a meta-learning-based architecture named DAFOS-NET.
Our training approach ensures that DAFOS-NET can generalize well to new scenarios in the target domain.
We present three benchmarks for DA-FSOS based on the Office-Home, mini-ImageNet/CUB, and DomainNet datasets.
arXiv Detail & Related papers (2023-09-22T12:04:47Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Federated Unsupervised Domain Adaptation for Face Recognition [26.336693850812118]
We propose federated unsupervised domain adaptation for face recognition, FedFR.
For unlabeled data in the target domain, we enhance a clustering algorithm with a distance constraint to improve the quality of predicted pseudo labels.
We also propose a new domain constraint loss to regularize source domain training in federated learning.
arXiv Detail & Related papers (2022-04-09T04:02:03Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.