Performance Optimization for Federated Person Re-identification via
Benchmark Analysis
- URL: http://arxiv.org/abs/2008.11560v2
- Date: Fri, 9 Oct 2020 17:57:52 GMT
- Title: Performance Optimization for Federated Person Re-identification via
Benchmark Analysis
- Authors: Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin,
Dongzhan Zhou, Shuai Zhang, Shuai Yi
- Abstract summary: Federated learning is a privacy-preserving machine learning technique that learns a shared model across decentralized clients.
In this work, we apply federated learning to person re-identification (FedReID) and optimize its performance in real-world scenarios.
- Score: 25.9422385039648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a privacy-preserving machine learning technique that
learns a shared model across decentralized clients. It can alleviate privacy
concerns of person re-identification, an important computer vision task. In
this work, we apply federated learning to person re-identification (FedReID)
and optimize its performance under the statistical heterogeneity of
real-world scenarios. We first construct a new benchmark to investigate the
performance of FedReID. This benchmark consists of (1) nine datasets with
different volumes, sourced from different domains, to simulate real-world
heterogeneity, (2) two federated scenarios, and (3) an enhanced
federated algorithm for FedReID. The benchmark analysis shows that the
client-edge-cloud architecture, represented by the federated-by-dataset
scenario, outperforms the client-server architecture in FedReID. It
also reveals the bottlenecks of FedReID in real-world scenarios: poor
performance on large datasets, caused by unbalanced weights in model
aggregation, and slow convergence. Then we propose two
optimization methods: (1) To address the unbalanced weight problem, we propose
a new method that dynamically adjusts the weights according to the scale of
each client's model changes in every training round; (2) To facilitate
convergence, we adopt knowledge distillation to refine the server model with
knowledge generated by client models on a public dataset (illustrative
sketches of both ideas follow the abstract). Experiment results
demonstrate that our strategies can achieve much better convergence with
superior performance on all datasets. We believe that our work will inspire the
community to further explore the implementation of federated learning on more
computer vision tasks in real-world scenarios.
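As a concrete illustration of the first optimization, below is a minimal numpy sketch of dynamic aggregation weights, assuming each client's weight is made proportional to the magnitude of its model change in the current round. The function name `aggregate_dynamic` and the exact weighting rule are illustrative, not the paper's precise formulation.

```python
import numpy as np

def aggregate_dynamic(global_model, client_models):
    """Aggregate client models, weighting each client by the scale of its
    model change this round instead of by its dataset size.

    `global_model` and each entry of `client_models` are 1-D parameter
    vectors. Illustrative sketch, not the paper's exact rule."""
    # Scale of each client's update: L2 norm of (local - global).
    changes = np.array([np.linalg.norm(m - global_model) for m in client_models])
    # Normalize the scales into aggregation weights (epsilon avoids 0/0).
    weights = changes / (changes.sum() + 1e-12)
    # Weighted average of the client models.
    return sum(w * m for w, m in zip(weights, client_models))

# Toy usage: three clients, five parameters each.
rng = np.random.default_rng(0)
g = rng.normal(size=5)
clients = [g + rng.normal(scale=s, size=5) for s in (0.1, 0.5, 1.0)]
print(aggregate_dynamic(g, clients))
```

Weighting by update scale rather than dataset size keeps clients with large datasets from being under- or over-counted when dataset-size weights are skewed, which is the unbalanced-weight bottleneck the abstract identifies.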
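For the second optimization, here is a sketch of server-side knowledge distillation on a public dataset: client predictions are averaged into soft targets and the server model is trained to match them. The linear model family and cross-entropy loss are assumptions for illustration; the paper's models are deep networks.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_server(server_W, client_Ws, public_X, lr=0.1, steps=100):
    """Refine a linear server model so its predictions on a public dataset
    match the averaged predictions of the client models (the "knowledge").
    Illustrative sketch; model family and loss are assumptions."""
    # Soft targets: average of the client predictive distributions.
    targets = np.mean([softmax(public_X @ W) for W in client_Ws], axis=0)
    for _ in range(steps):
        probs = softmax(public_X @ server_W)
        # Gradient of cross-entropy between soft targets and server output.
        grad = public_X.T @ (probs - targets) / len(public_X)
        server_W -= lr * grad
    return server_W

# Toy usage: 3 client models, 8 features, 4 identities, 64 public samples.
rng = np.random.default_rng(0)
X_pub = rng.normal(size=(64, 8))
clients = [rng.normal(size=(8, 4)) for _ in range(3)]
server = distill_server(rng.normal(size=(8, 4)), clients, X_pub)
```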
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
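A minimal sketch of the per-client adaptive step this entry describes, using a plain AMSGrad update in which each client keeps its own optimizer state (bias correction omitted for brevity). This illustrates the general technique, not the FedLALR schedule itself.

```python
import numpy as np

def amsgrad_client_step(w, grad, state, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
    """One local AMSGrad step. Each client keeps its own (m, v, v_hat)
    state, so the effective learning rate adapts per client."""
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad                 # first moment
    v = b2 * v + (1 - b2) * grad**2              # second moment
    v_hat = np.maximum(v_hat, v)                 # AMSGrad: non-decreasing
    w = w - lr * m / (np.sqrt(v_hat) + eps)      # adaptive update
    return w, (m, v, v_hat)
```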
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Momentum Benefits Non-IID Federated Learning Simply and Provably [22.800862422479913]
Federated learning is a powerful paradigm for large-scale machine learning.
FedAvg and SCAFFOLD are two prominent algorithms for tackling the challenges of non-IID client data.
This paper explores the utilization of momentum to enhance the performance of FedAvg and SCAFFOLD.
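Below is a sketch of one common way to add momentum here, server-side momentum on top of FedAvg: the averaged client update is folded into a velocity term before being applied. This is an assumed illustration of the general technique, not necessarily the paper's exact algorithm.

```python
import numpy as np

def fedavg_with_server_momentum(global_w, client_ws, velocity, beta=0.9, lr=1.0):
    """One round of FedAvg with server-side momentum. `velocity` starts as
    a zero vector with the same shape as `global_w`."""
    # Average of the client updates (local model minus current global).
    avg_update = np.mean([w - global_w for w in client_ws], axis=0)
    velocity = beta * velocity + avg_update   # momentum accumulation
    return global_w + lr * velocity, velocity
```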
arXiv Detail & Related papers (2023-06-28T18:52:27Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
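A minimal sketch of the shared-representation idea described above: the representation parameters are aggregated globally while each client keeps a personal head. Class and argument names are illustrative, not from the paper.

```python
import numpy as np

class PersonalizedClient:
    """Client model = shared representation (common across clients, shape
    (d, head_dim)) plus a user-specific linear head that stays local."""
    def __init__(self, shared_W, head_dim, n_classes, rng):
        self.shared_W = shared_W                            # aggregated by server
        self.head = rng.normal(size=(head_dim, n_classes))  # never leaves client

    def predict(self, X):
        features = np.tanh(X @ self.shared_W)   # common representation
        return features @ self.head             # personalized output
```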
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Optimizing Performance of Federated Person Re-identification:
Benchmarking and Analysis [14.545746907150436]
FedReID applies federated learning, an emerging distributed training method, to person ReID.
FedReID preserves data privacy by aggregating model updates, instead of raw data, from clients to a central server.
arXiv Detail & Related papers (2022-05-24T15:20:32Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in
Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
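A sketch of the ensemble-transfer idea described above: small client models produce an averaged prediction on unlabeled public data, and a larger server model is fit to reproduce it. The random-feature expansion and least-squares fit stand in for real training; all names are illustrative, not Fed-ET's actual procedure.

```python
import numpy as np

def ensemble_transfer(client_Ws, unlabeled_X, server_dim):
    """Fit a larger server model to the consensus of small client models."""
    # Consensus "knowledge": average of small linear client models.
    consensus = np.mean([unlabeled_X @ W for W in client_Ws], axis=0)
    # Larger server model: random feature expansion + linear readout.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(unlabeled_X.shape[1], server_dim))
    H = np.tanh(unlabeled_X @ P)                        # wider hidden features
    readout, *_ = np.linalg.lstsq(H, consensus, rcond=None)
    return P, readout                                   # the larger server model
```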
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that simultaneously addresses two challenges: distributional heterogeneity across clients and inter-client noise.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
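The local mixup step named in this entry's title is standard mixup applied on a client's own batch; a minimal sketch follows. Parameter names are illustrative.

```python
import numpy as np

def local_mixup(X, y_onehot, alpha=0.4, rng=None):
    """Mixup on a client's local batch: convex combinations of example
    pairs, which smooths decision boundaries and adds noise robustness."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient
    idx = rng.permutation(len(X))         # random pairing of examples
    X_mix = lam * X + (1 - lam) * X[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return X_mix, y_mix
```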
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
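A simple leave-one-out proxy for such a notion: measure how far the aggregated parameters move when one client is dropped. This is a hypothetical baseline for intuition, not the paper's estimator.

```python
import numpy as np

def client_influence(client_ws):
    """Leave-one-out sketch of per-client influence on the aggregate."""
    ws = np.asarray(client_ws)     # shape: (n_clients, n_params)
    full = ws.mean(axis=0)         # aggregate with every client included
    # Distance the global average moves when client i is removed.
    return [np.linalg.norm(full - np.delete(ws, i, axis=0).mean(axis=0))
            for i in range(len(ws))]
```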
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- CatFedAvg: Optimising Communication-efficiency and Classification
Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset with 70 absolute percentage points lower network transfer compared to FedAvg.
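One plausible reading of a category-coverage strategy is greedy client selection that maximizes the number of label categories seen per round; a sketch under that assumption follows. The paper's actual strategy may differ.

```python
def select_clients(client_labels, n_select):
    """Greedily pick clients whose local label sets add the most unseen
    categories. `client_labels` is a list of label collections, one per
    client; illustrative only."""
    chosen, covered = [], set()
    pool = dict(enumerate(client_labels))
    for _ in range(min(n_select, len(pool))):
        best = max(pool, key=lambda i: len(set(pool[i]) - covered))
        covered |= set(pool[best])
        chosen.append(best)
        del pool[best]
    return chosen

# Toy usage: four clients with skewed local label sets, pick two.
print(select_clients([[0, 1], [1, 2], [3, 4, 5], [0, 5]], 2))
```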
arXiv Detail & Related papers (2020-11-14T06:52:02Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
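A minimal sketch of the joint prediction this entry describes: the server-side shared model gives a base score and the client's personalized model adds a local correction. Linear models and names are assumptions for illustration.

```python
import numpy as np

def residual_predict(x, shared_W, local_W):
    """Joint prediction: shared base score plus personal residual."""
    return x @ shared_W + x @ local_W
```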
arXiv Detail & Related papers (2020-03-28T19:55:24Z)