Towards Personalized Federated Learning via Comprehensive Knowledge Distillation
- URL: http://arxiv.org/abs/2411.03569v1
- Date: Wed, 06 Nov 2024 00:17:36 GMT
- Title: Towards Personalized Federated Learning via Comprehensive Knowledge Distillation
- Authors: Pengju Wang, Bochao Liu, Weijia Guo, Yong Li, Shiming Ge
- Abstract summary: Federated learning is a distributed machine learning paradigm designed to protect data privacy.
Data heterogeneity across various clients results in catastrophic forgetting, where the model rapidly forgets previous knowledge while acquiring new knowledge.
We present a novel personalized federated learning method that uses global and historical models as teachers and the local model as the student.
- Score: 21.026617948534707
- Abstract: Federated learning is a distributed machine learning paradigm designed to protect data privacy. However, data heterogeneity across various clients results in catastrophic forgetting, where the model rapidly forgets previous knowledge while acquiring new knowledge. To address this challenge, personalized federated learning has emerged to customize a personalized model for each client. However, the inherent limitation of this mechanism is its excessive focus on personalization, potentially hindering the generalization of those models. In this paper, we present a novel personalized federated learning method that uses global and historical models as teachers and the local model as the student to facilitate comprehensive knowledge distillation. The historical model represents the local model from the last round of client training, containing historical personalized knowledge, while the global model represents the aggregated model from the last round of server aggregation, containing global generalized knowledge. By applying knowledge distillation, we effectively transfer global generalized knowledge and historical personalized knowledge to the local model, thus mitigating catastrophic forgetting and enhancing the general performance of personalized models. Extensive experimental results demonstrate the significant advantages of our method.
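The following is a minimal PyTorch sketch of the comprehensive distillation objective described in the abstract: the local student is trained on its own labels while distilling soft targets from two teachers, the aggregated global model and the previous-round historical model. The temperature and loss weights here are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of comprehensive knowledge distillation: cross-entropy on
# local labels plus KD terms from the global teacher (generalized knowledge)
# and the historical teacher (personalized knowledge from the last round).
import torch
import torch.nn.functional as F

def comprehensive_kd_loss(student_logits, global_logits, historical_logits,
                          labels, temperature=2.0, w_global=0.5, w_hist=0.5):
    """Supervised loss plus soft-target distillation from both teachers."""
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    # Soft targets from the global (generalized) teacher.
    kd_global = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(global_logits.detach() / t, dim=1),
        reduction="batchmean") * t * t
    # Soft targets from the historical (personalized) teacher.
    kd_hist = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(historical_logits.detach() / t, dim=1),
        reduction="batchmean") * t * t
    return ce + w_global * kd_global + w_hist * kd_hist
```

In each local round, the client would compute this loss per batch, backpropagate only through the student, and upload the updated student to the server for aggregation.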
Related papers
- Federated Progressive Self-Distillation with Logits Calibration for Personalized IIoT Edge Intelligence [0.1227734309612871]
This study proposes a novel personalized federated learning (PFL) method, Federated Progressive Self-Distillation (FedPSD), based on logits calibration and progressive self-distillation.
To address the issue of global knowledge forgetting, we propose a logits calibration approach for the local training loss and design a progressive self-distillation strategy.
arXiv Detail & Related papers (2024-11-30T09:32:05Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks [9.54563359677778]
This paper presents a new approach to federated learning that allows selecting between the global and personalized models for each input.
This is achieved through careful modeling of predictive uncertainties, which helps detect in- and out-of-distribution data both locally and globally.
A comprehensive experimental evaluation on popular real-world image datasets shows the superior performance of the model in the presence of out-of-distribution data.
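A minimal sketch of the uncertainty-gated selection described above, using predictive entropy as a simple stand-in for the paper's Dirichlet-based posterior networks: each input is routed to whichever of the two models is more confident about it.

```python
# Route each input to the less-uncertain model (global vs. personalized).
# Predictive entropy is an illustrative proxy, not the paper's exact
# Dirichlet-based uncertainty measure.
import torch

def select_prediction(x, global_model, personal_model):
    with torch.no_grad():
        p_g = torch.softmax(global_model(x), dim=1)
        p_p = torch.softmax(personal_model(x), dim=1)
    # Shannon entropy of each predictive distribution (lower = more confident).
    ent_g = -(p_g * p_g.clamp_min(1e-12).log()).sum(dim=1)
    ent_p = -(p_p * p_p.clamp_min(1e-12).log()).sum(dim=1)
    use_personal = (ent_p < ent_g).unsqueeze(1)
    return torch.where(use_personal, p_p, p_g)
```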
arXiv Detail & Related papers (2023-12-18T14:30:05Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for federated distillation (FD), termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
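The sketch below illustrates the server-side, data-free fine-tuning idea in the spirit of FedFTG: a conditional generator (an assumed interface) synthesizes pseudo-samples, and the global model is distilled toward the averaged logits of the uploaded client models on those samples. The interface, hyperparameters, and single distillation loss are simplified assumptions, not the paper's exact design.

```python
# Hedged sketch of server-side data-free distillation: fine-tune the global
# model to match the ensemble of client models on generator-synthesized data.
import torch
import torch.nn.functional as F

def finetune_global(generator, global_model, client_models, optimizer,
                    steps=100, batch_size=64, noise_dim=100, num_classes=10,
                    temperature=2.0):
    t = temperature
    for _ in range(steps):
        z = torch.randn(batch_size, noise_dim)
        y = torch.randint(0, num_classes, (batch_size,))
        pseudo = generator(z, y)  # label-conditioned pseudo-data (assumption)
        with torch.no_grad():
            # Ensemble teacher: average client logits on the pseudo-data.
            teacher = torch.stack([m(pseudo) for m in client_models]).mean(0)
        student = global_model(pseudo)
        loss = F.kl_div(F.log_softmax(student / t, dim=1),
                        F.softmax(teacher / t, dim=1),
                        reduction="batchmean") * t * t
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```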
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
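The summary does not spell out the mechanism, but the title's "local memorization" suggests augmenting a shared model with memorized local examples. A hedged sketch of one such scheme follows: the global model's prediction is interpolated with a k-nearest-neighbour estimate over a client's locally stored (feature, label) pairs; the interpolation weight and the kNN choice are illustrative assumptions.

```python
# Hedged sketch of prediction via local memorization: blend a shared global
# model's output with a kNN estimate over the client's memorized features.
import torch
import torch.nn.functional as F

def knn_interpolated_predict(feat, global_logits, memory_feats, memory_labels,
                             num_classes, k=5, lam=0.5):
    # Distances from the query feature to every memorized local feature.
    dists = torch.cdist(feat.unsqueeze(0), memory_feats).squeeze(0)
    knn_idx = dists.topk(k, largest=False).indices
    # Empirical label distribution over the k nearest local neighbours.
    knn_probs = F.one_hot(memory_labels[knn_idx], num_classes).float().mean(dim=0)
    global_probs = torch.softmax(global_logits, dim=-1)
    # lam trades personalization (local memory) against generalization.
    return lam * knn_probs + (1.0 - lam) * global_probs
```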
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Decentralised Person Re-Identification with Selective Knowledge Aggregation [56.40855978874077]
Existing person re-identification (Re-ID) methods mostly follow a centralised learning paradigm that pools all training data into one collection for model learning.
Two recent works have introduced decentralised (federated) Re-ID learning for constructing a globally generalised model on the server.
However, these methods offer little on how to adapt the generalised model to maximise its performance on each client's domain-specific Re-ID task.
We present a new Selective Knowledge Aggregation approach to decentralised person Re-ID to optimise the trade-off between model personalisation and generalisation.
arXiv Detail & Related papers (2021-10-21T18:09:53Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Specialized federated learning using a mixture of experts [0.6974741712647655]
In federated learning, clients share a global model that has been trained on decentralized local client data.
We propose an alternative method to learn a personalized model for each client in a federated setting.
Our results show that the mixture of experts model is better suited as a personalized model for devices in these settings.
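A minimal sketch of the mixture-of-experts personalization described above: a small gating network, learned on the client, blends a globally shared expert with a locally trained expert per input. The gate architecture and the two-expert setup are illustrative assumptions.

```python
# Per-client mixture of experts: a sigmoid gate mixes the local expert
# (personalization) with the global expert (generalization) per example.
import torch
import torch.nn as nn

class PersonalizedMoE(nn.Module):
    def __init__(self, global_expert, local_expert, in_dim):
        super().__init__()
        self.global_expert = global_expert  # shared, federated-trained expert
        self.local_expert = local_expert    # trained only on the client's data
        self.gate = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x.flatten(1))  # per-example mixing weight in (0, 1)
        return g * self.local_expert(x) + (1 - g) * self.global_expert(x)
```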
arXiv Detail & Related papers (2020-10-05T14:43:57Z)
- Survey of Personalization Techniques for Federated Learning [0.08594140167290096]
Federated learning enables machine learning models to learn from private decentralized data without compromising privacy.
This paper highlights the need for personalization and surveys recent research on this topic.
arXiv Detail & Related papers (2020-03-19T10:47:55Z)