Personalized Retrogress-Resilient Framework for Real-World Medical
Federated Learning
- URL: http://arxiv.org/abs/2110.00394v1
- Date: Fri, 1 Oct 2021 13:24:29 GMT
- Title: Personalized Retrogress-Resilient Framework for Real-World Medical
Federated Learning
- Authors: Zhen Chen, Meilu Zhu, Chen Yang, Yixuan Yuan
- Abstract summary: We propose a personalized retrogress-resilient framework to produce a superior personalized model for each client.
Experiments on a real-world dermoscopic FL dataset show that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods.
- Score: 8.240098954377794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, deep learning methods with large-scale datasets can produce
clinically useful models for computer-aided diagnosis. However, privacy and
ethical concerns are increasingly critical, making it difficult to collect
large quantities of data from multiple institutions. Federated Learning (FL)
provides a promising decentralized solution for training models collaboratively
by exchanging client models instead of private data. However, the server
aggregation of existing FL methods has been observed to degrade model
performance in real-world medical FL settings, a phenomenon termed retrogress. To address
this problem, we propose a personalized retrogress-resilient framework to
produce a superior personalized model for each client. Specifically, we devise
a Progressive Fourier Aggregation (PFA) at the server to achieve more stable
and effective global knowledge gathering by integrating client models from
low-frequency to high-frequency components gradually. Moreover, we introduce a
deputy model at each client to receive the aggregated server model, and design a
Deputy-Enhanced Transfer (DET) strategy that conducts three steps,
Recover-Exchange-Sublimate, to improve the personalized local model by
transferring global knowledge smoothly. Extensive experiments on a real-world
dermoscopic FL dataset show that our personalized retrogress-resilient
framework outperforms state-of-the-art FL methods and generalizes to an
out-of-distribution cohort. The code and dataset are
available at https://github.com/CityU-AIM-Group/PRR-FL.
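The abstract describes PFA only at a high level. The following is a minimal numpy sketch of one plausible reading: each client's parameter matrix is moved into the 2D Fourier domain, the low-frequency band is averaged across clients while each client keeps its own high-frequency components, and the shared band widens with the communication round. The helper `low_freq_mask`, the `ratio_min`/`ratio_max` bounds, and the linear widening schedule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def low_freq_mask(shape, ratio):
    # Boolean mask over an fftshift-ed 2D spectrum selecting the centred
    # low-frequency band; `ratio` in (0, 1] is the band width as a
    # fraction of each dimension.
    h, w = shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(h * ratio / 2)), max(1, int(w * ratio / 2))
    mask = np.zeros(shape, dtype=bool)
    mask[cy - ry:cy + ry + 1, cx - rx:cx + rx + 1] = True
    return mask

def progressive_fourier_aggregation(client_mats, round_idx, total_rounds,
                                    ratio_min=0.1, ratio_max=1.0):
    # Widen the shared low-frequency band linearly over communication rounds.
    t = round_idx / max(1, total_rounds - 1)
    ratio = ratio_min + (ratio_max - ratio_min) * t
    specs = [np.fft.fftshift(np.fft.fft2(m)) for m in client_mats]
    mask = low_freq_mask(client_mats[0].shape, ratio)
    # Average the low-frequency components across clients ...
    shared_low = np.mean([s * mask for s in specs], axis=0)
    # ... and let each client keep its own high-frequency components.
    out = []
    for s in specs:
        fused = shared_low + s * ~mask
        out.append(np.real(np.fft.ifft2(np.fft.ifftshift(fused))))
    return out
```

In this reading, early rounds share only the coarse structure of the weights, which would explain the claimed stability benefit: abrupt high-frequency differences between heterogeneous clients are merged only after the shared low-frequency content has settled.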
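The three DET steps are only named in the abstract, so the sketch below is a hedged reconstruction using mutual knowledge distillation, a common way to realize such deputy/personal transfer; the exact losses and step scheduling in the paper may differ. Here `deputy` is assumed to be loaded with the aggregated server weights before each round.

```python
import torch.nn.functional as F

def det_step(deputy, personal, batch, opt_d, opt_p, phase):
    # One hedged reading of Deputy-Enhanced Transfer: the deputy (carrying
    # global knowledge) first recovers on local data, then exchanges
    # knowledge with the personalized model via mutual distillation, and
    # finally transfers ("sublimates") its knowledge one-way.
    x, y = batch
    logits_d, logits_p = deputy(x), personal(x)

    def kd(student_logits, teacher_logits):
        return F.kl_div(F.log_softmax(student_logits, dim=-1),
                        F.softmax(teacher_logits.detach(), dim=-1),
                        reduction="batchmean")

    loss_d = F.cross_entropy(logits_d, y)
    loss_p = F.cross_entropy(logits_p, y)
    if phase == "exchange":            # mutual distillation
        loss_d = loss_d + kd(logits_d, logits_p)
        loss_p = loss_p + kd(logits_p, logits_d)
    elif phase == "sublimate":         # one-way deputy -> personal transfer
        loss_p = loss_p + kd(logits_p, logits_d)
    # phase == "recover": plain supervised losses, letting the deputy
    # adapt to local data before any knowledge is exchanged.
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

A caller would presumably run `recover` until the deputy catches up with the personalized model on local data, then switch to `exchange` and `sublimate`; those switching criteria are also assumptions here.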
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework that uses bi-level optimization to tackle these data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Training Heterogeneous Client Models using Knowledge Distillation in
Serverless Federated Learning [0.5510212613486574]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders.
arXiv Detail & Related papers (2024-02-11T20:15:52Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class-prototype similarity distillation in a federated framework that aligns the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Federated Learning for Semantic Parsing: Task Formulation, Evaluation
Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round (a sketch of this weighting appears after this list).
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z) - The Best of Both Worlds: Accurate Global and Personalized Models through
Federated Learning with Data-Free Hyper-Knowledge Distillation [17.570719572024608]
FedHKD (Federated Hyper-Knowledge Distillation) is a novel FL algorithm in which clients rely on knowledge distillation to train local models.
Unlike other KD-based pFL methods, FedHKD does not rely on a public dataset, nor does it deploy a generative model at the server.
We conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD yields significant improvements in both personalized and global model performance.
arXiv Detail & Related papers (2023-01-21T16:20:57Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, synthetic sets of data are constructed on each client to locally match the loss landscape of the original data (a sketch of one possible matching objective appears after this list).
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model at the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share knowledge among multiple local models of different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
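The Lorar entry above describes a concrete aggregation rule: weight each client's update by how much its training loss dropped in the current round. Below is a minimal sketch of that rule, with the normalization scheme and the uniform fallback as assumptions.

```python
def loss_reduction_aggregate(client_states, loss_before, loss_after):
    # Weight each client's model state by its training-loss reduction this
    # round, so clients whose local training made more progress contribute
    # more to the global model. `client_states` is a list of parameter
    # dicts (name -> tensor), one per client.
    reductions = [max(b - a, 0.0) for b, a in zip(loss_before, loss_after)]
    total = sum(reductions)
    n = len(client_states)
    weights = ([r / total for r in reductions] if total > 0
               else [1.0 / n] * n)  # fall back to FedAvg-style uniform weights
    return {key: sum(w * state[key] for w, state in zip(weights, client_states))
            for key in client_states[0]}
```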
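The FedDM entry mentions matching the local loss landscape with synthetic data but does not spell out the objective. The sketch below uses gradient matching as a stand-in criterion (an assumption, not necessarily the paper's exact formulation): the synthetic set is optimized so that the model's gradients on it track the gradients on real batches.

```python
import torch
import torch.nn.functional as F

def condense_client_data(model, real_loader, n_syn, img_shape, n_classes,
                         steps=200, lr=0.1):
    # Learn a small synthetic set whose gradients on the current model match
    # those of the client's real data, so the server can approximate the
    # local loss surface without seeing raw samples.
    syn_x = torch.randn(n_syn, *img_shape, requires_grad=True)
    syn_y = torch.arange(n_syn) % n_classes          # fixed balanced labels
    opt = torch.optim.SGD([syn_x], lr=lr)
    for _ in range(steps):
        x, y = next(iter(real_loader))
        g_real = torch.autograd.grad(F.cross_entropy(model(x), y),
                                     model.parameters())
        g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
        # Penalize the gap between synthetic and real gradients; only the
        # synthetic images are updated (gradients also land in the model's
        # .grad buffers, but the model is never stepped here).
        gap = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        opt.zero_grad(); gap.backward(); opt.step()
    return syn_x.detach(), syn_y
```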