Federated and Generalized Person Re-identification through Domain and
Feature Hallucinating
- URL: http://arxiv.org/abs/2203.02689v2
- Date: Tue, 8 Mar 2022 12:04:03 GMT
- Title: Federated and Generalized Person Re-identification through Domain and
Feature Hallucinating
- Authors: Fengxiang Yang, Zhun Zhong, Zhiming Luo, Shaozi Li, Nicu Sebe
- Abstract summary: We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID)
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
- Score: 88.77196261300699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the problem of federated domain generalization
(FedDG) for person re-identification (re-ID), which aims to learn a generalized
model with multiple decentralized labeled source domains. An empirical method
(FedAvg) trains local models individually and averages them to obtain the
global model for further local fine-tuning or deploying in unseen target
domains. One drawback of FedAvg is neglecting the data distributions of other
clients during local training, making the local model overfit local data and
producing a poorly-generalized global model. To solve this problem, we propose
a novel method, called "Domain and Feature Hallucinating (DFH)", to produce
diverse features for learning generalized local and global models.
Specifically, after each model aggregation process, we share the Domain-level
Feature Statistics (DFS) among different clients without violating data
privacy. During local training, the DFS are used to synthesize novel domain
statistics with the proposed domain hallucinating, which is achieved by
re-weighting DFS with random weights. Then, we propose feature hallucinating to
diversify local features by scaling and shifting them to the distribution of
the obtained novel domain. The synthesized novel features retain the original
pair-wise similarities, enabling us to utilize them to optimize the model in a
supervised manner. Extensive experiments verify that the proposed DFH can
effectively improve the generalization ability of the global model. Our method
achieves the state-of-the-art performance for FedDG on four large-scale re-ID
benchmarks.
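The two hallucinating steps described in the abstract can be sketched in a few lines. The sketch below is a simplified illustration, not the authors' implementation: it assumes each client's Domain-level Feature Statistics (DFS) are per-dimension means and standard deviations, uses Dirichlet-sampled random weights for the re-weighting, and applies an AdaIN-style scale-and-shift for feature hallucinating; all function names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def domain_hallucinate(dfs, rng):
    """Domain hallucinating: combine the shared Domain-level Feature
    Statistics (DFS) of all clients into a novel domain's statistics
    by re-weighting them with random convex weights."""
    mus = np.stack([s["mu"] for s in dfs])        # (num_domains, d)
    sigmas = np.stack([s["sigma"] for s in dfs])  # (num_domains, d)
    w = rng.dirichlet(np.ones(len(dfs)))          # random weights, sum to 1
    return w @ mus, w @ sigmas

def feature_hallucinate(feats, mu_new, sigma_new, eps=1e-6):
    """Feature hallucinating: scale and shift local features to the
    hallucinated domain's distribution; pair-wise similarities of the
    batch are preserved, so the original labels still apply."""
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    normalized = (feats - mu) / (sigma + eps)
    return normalized * sigma_new + mu_new

# toy demo: 3 clients' domain statistics over d = 4 feature dimensions
dfs = [{"mu": rng.normal(size=4), "sigma": rng.uniform(0.5, 2.0, size=4)}
       for _ in range(3)]
local_feats = rng.normal(size=(8, 4))             # one local mini-batch
mu_new, sigma_new = domain_hallucinate(dfs, rng)
novel_feats = feature_hallucinate(local_feats, mu_new, sigma_new)
# novel_feats can now be used with the original labels in a supervised loss
```

Because the features are first normalized and then re-scaled, the hallucinated batch matches the novel domain's mean and standard deviation while keeping the relative geometry of the original samples.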
Related papers
- Feature Diversification and Adaptation for Federated Domain Generalization [27.646565383214227]
In real-world applications, local clients often operate within their limited domains, leading to a 'domain shift' across clients.
We introduce the concept of federated feature diversification, which helps local models learn client-invariant representations while preserving privacy.
Our resultant global model shows robust performance on unseen test domain data.
arXiv Detail & Related papers (2024-07-11T07:45:10Z) - FDS: Feedback-guided Domain Synthesis with Multi-Source Conditional Diffusion Models for Domain Generalization [19.0284321951354]
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training.
We propose FDS, Feedback-guided Domain Synthesis, a novel strategy that employs diffusion models to synthesize novel pseudo-domains.
Our evaluations demonstrate that this methodology sets new benchmarks in domain generalization performance across a range of challenging datasets.
arXiv Detail & Related papers (2024-07-04T02:45:29Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a Parameter-Efficient Federated Anomaly Detection framework, named PeFAD, to address increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - FedLoGe: Joint Local and Generic Federated Learning under Long-tailed
Data [46.29190753993415]
Federated Long-Tailed Learning (Fed-LT) is a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution.
This paper introduces an approach termed Federated Local and Generic Model Training in Fed-LT (FedLoGe), which enhances both local and generic model performance.
arXiv Detail & Related papers (2024-01-17T05:04:33Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - FedSoup: Improving Generalization and Personalization in Federated
Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z) - A Novel Mix-normalization Method for Generalizable Multi-source Person
Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z) - Decentralised Person Re-Identification with Selective Knowledge
Aggregation [56.40855978874077]
Existing person re-identification (Re-ID) methods mostly follow a centralised learning paradigm which shares all training data to a collection for model learning.
Two recent works have introduced decentralised (federated) Re-ID learning for constructing a globally generalised model (server).
However, these methods are poor on how to adapt the generalised model to maximise its performance on individual client domain Re-ID tasks.
We present a new Selective Knowledge Aggregation approach to decentralised person Re-ID to optimise the trade-off between model personalisation and generalisation.
arXiv Detail & Related papers (2021-10-21T18:09:53Z)
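Several of the papers above build on averaging or interpolating client model weights: the FedAvg baseline discussed in the main abstract averages local models into a global one, and FedSoup interpolates between local and global models to trade off personalisation against generalisation. A minimal sketch of both operations, treating each model as a flat parameter vector (all names and the toy data are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: a dataset-size-weighted average of the
    clients' parameter vectors yields the global model."""
    sizes = np.asarray(client_sizes, dtype=float)
    w = sizes / sizes.sum()
    stacked = np.stack(client_weights)    # (num_clients, num_params)
    return w @ stacked

def interpolate(local_w, global_w, alpha):
    """Soup-style interpolation between a personalised local model and
    the global model; alpha=1 keeps the local model, alpha=0 the global."""
    return alpha * local_w + (1.0 - alpha) * global_w

# toy demo: 3 clients, 5-parameter "models"
rng = np.random.default_rng(1)
clients = [rng.normal(size=5) for _ in range(3)]
global_w = fedavg(clients, client_sizes=[100, 50, 50])
personalised = interpolate(clients[0], global_w, alpha=0.5)
```

This also makes FedAvg's drawback concrete: the averaging step sees only the final local weights, so nothing during local training pushes each client away from overfitting its own data distribution.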
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.