Optimizing Performance of Federated Person Re-identification:
Benchmarking and Analysis
- URL: http://arxiv.org/abs/2205.12144v1
- Date: Tue, 24 May 2022 15:20:32 GMT
- Title: Optimizing Performance of Federated Person Re-identification:
Benchmarking and Analysis
- Authors: Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
- Abstract summary: FedReID implements federated learning, an emerging distributed training method, to person ReID.
FedReID preserves data privacy by aggregating model updates, instead of raw data, from clients to a central server.
- Score: 14.545746907150436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasingly stringent data privacy regulations limit the development of
person re-identification (ReID) because person ReID training requires
centralizing an enormous amount of data that contains sensitive personal
information. To address this problem, we introduce federated person
re-identification (FedReID) -- implementing federated learning, an emerging
distributed training method, to person ReID. FedReID preserves data privacy by
aggregating model updates, instead of raw data, from clients to a central
server. Furthermore, we optimize the performance of FedReID under statistical
heterogeneity via benchmark analysis. We first construct a benchmark with an
enhanced algorithm, two architectures, and nine person ReID datasets with large
variances to simulate the real-world statistical heterogeneity. The benchmark
results present insights and bottlenecks of FedReID under statistical
heterogeneity, including challenges in convergence and poor performance on
datasets with large volumes. Based on these insights, we propose three
optimization approaches: (1) We adopt knowledge distillation to facilitate the
convergence of FedReID by better transferring knowledge from clients to the
server; (2) We introduce client clustering to improve the performance of large
datasets by aggregating clients with similar data distributions; (3) We propose
cosine distance weight to elevate performance by dynamically updating the
weights for aggregation depending on how well models are trained in clients.
Extensive experiments demonstrate that these approaches achieve satisfying
convergence with much better performance on all datasets. We believe that
FedReID will shed light on implementing and optimizing federated learning on
more computer vision applications.
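The abstract describes its three optimizations only at a high level, so the sketches below are illustrative reconstructions, not the authors' code. First, knowledge distillation from clients to the server: a minimal Python sketch, assuming the server model (student) distills from the averaged soft predictions of the client models (teachers) on a shared unlabeled dataset; the shared dataset, temperature, and optimizer settings are all assumptions.

```python
# Hypothetical sketch (not the paper's code): server-side knowledge
# distillation, where the server model (student) matches the averaged
# soft predictions of client models (teachers) on a shared unlabeled set.
import torch
import torch.nn.functional as F

def distill_to_server(server_model, client_models, shared_loader,
                      T=4.0, lr=1e-3, device="cpu"):
    """One distillation pass; temperature T and lr are assumed values."""
    opt = torch.optim.SGD(server_model.parameters(), lr=lr)
    server_model.to(device).train()
    for m in client_models:
        m.to(device).eval()
    for (x,) in shared_loader:              # unlabeled image batches
        x = x.to(device)
        with torch.no_grad():               # average the teachers' soft labels
            teacher_probs = torch.stack(
                [F.softmax(m(x) / T, dim=1) for m in client_models]
            ).mean(dim=0)
        student_log_probs = F.log_softmax(server_model(x) / T, dim=1)
        # Standard KD loss: KL(teacher || student), rescaled by T^2.
        loss = F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * (T * T)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return server_model
```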
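Second, client clustering: a hypothetical sketch that groups clients by the similarity of their flattened model updates as a proxy for "similar data distributions", then averages within each cluster; the use of k-means and the cluster count are assumptions, not the paper's stated method.

```python
# Hypothetical sketch: group clients whose model updates are similar
# (a proxy for similar data distributions), then run FedAvg-style
# averaging within each cluster instead of globally.
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_aggregate(client_updates, n_clusters=3):
    """client_updates: list of 1-D numpy arrays (flattened model deltas)."""
    X = np.stack(client_updates)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    cluster_models = {}
    for c in range(n_clusters):
        members = X[labels == c]
        if len(members):                    # plain average within the cluster
            cluster_models[c] = members.mean(axis=0)
    return labels, cluster_models
```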
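Third, cosine distance weight: a sketch assuming each client's aggregation weight grows with the cosine distance between its locally trained parameters and the current global parameters, which is one plausible reading of "how well models are trained in clients"; the exact quantity the authors compare is not specified in this summary.

```python
# Hypothetical sketch: weight each client's contribution by the cosine
# distance between its locally trained model and the current global model,
# so clients whose models moved more from the global model count more.
import numpy as np

def cosine_distance_weights(global_w, client_ws, eps=1e-12):
    """global_w and each client_ws[i] are flattened 1-D parameter vectors."""
    dists = []
    for w in client_ws:
        cos = np.dot(global_w, w) / (
            np.linalg.norm(global_w) * np.linalg.norm(w) + eps)
        dists.append(1.0 - cos)             # cosine distance, in [0, 2]
    dists = np.asarray(dists)
    return dists / (dists.sum() + eps)      # normalize to sum to 1

def aggregate(client_ws, weights):
    """Weighted average of client parameter vectors."""
    return np.average(np.stack(client_ws), axis=0, weights=weights)
```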
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under diverse sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization [3.3711670942444014]
Federated learning enables edge devices to train a global model collaboratively without exposing their data.
We tackle a new type of non-IID data, called cluster-skewed non-IID, observed in real-world datasets.
We propose an aggregation scheme that guarantees equality between clusters.
arXiv Detail & Related papers (2023-02-21T02:53:37Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency and competitive communication efficiency.
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that not only is data heterogeneity in current setups not necessarily a problem, but it can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- Aggregation Delayed Federated Learning [20.973999078271483]
Federated learning is a distributed machine learning paradigm where multiple data owners (clients) collaboratively train one machine learning model while keeping data on their own devices.
Studies have found performance reduction with standard federated algorithms, such as FedAvg, on non-IID data.
Many existing works on handling non-IID data adopt the same aggregation framework as FedAvg (sketched after this list) and focus on improving model updates either on the server side or on clients.
In this work, we tackle this challenge by introducing redistribution rounds that delay the aggregation. We perform experiments on multiple tasks and show that the proposed framework significantly improves performance on non-IID data.
arXiv Detail & Related papers (2021-08-17T04:06:10Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset with 70% lower network transfer than FedAvg.
arXiv Detail & Related papers (2020-11-14T06:52:02Z)
- Performance Optimization for Federated Person Re-identification via Benchmark Analysis [25.9422385039648]
Federated learning is a privacy-preserving machine learning technique that learns a shared model across decentralized clients.
In this work, we implement federated learning to person re-identification (FedReID) and optimize its performance in the real-world scenario.
arXiv Detail & Related papers (2020-08-26T13:41:20Z)
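Several entries above (FedAF, FedSkip, Aggregation Delayed Federated Learning, CatFedAvg) define themselves against the standard FedAvg aggregation; for reference, a minimal sketch of that baseline, with all names illustrative:

```python
# Minimal sketch of standard FedAvg aggregation: the server averages client
# parameter vectors weighted by each client's local dataset size.
import numpy as np

def fedavg(client_ws, n_samples):
    """client_ws: list of flattened parameter vectors; n_samples: list of ints."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()                # normalize to sum to 1
    return np.average(np.stack(client_ws), axis=0, weights=weights)
```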