Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning
- URL: http://arxiv.org/abs/2203.09249v2
- Date: Thu, 26 Oct 2023 03:56:58 GMT
- Title: Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning
- Authors: Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan
- Abstract summary: Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
- Score: 86.59588262014456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging distributed learning paradigm under
privacy constraints. Data heterogeneity is one of the main challenges in FL,
which results in slow convergence and degraded performance. Most existing
approaches tackle the heterogeneity challenge only by restricting local model
updates on the clients, ignoring the performance drop caused by direct global
model aggregation. Instead, we propose a data-free knowledge distillation
method to fine-tune the global model on the server (FedFTG), which alleviates the
degradation caused by direct model aggregation. Concretely, FedFTG explores the input space
of local models through a generator and uses the generated samples to transfer
knowledge from the local models to the global model. Moreover, we propose a hard sample mining
scheme to achieve effective knowledge distillation throughout the training. In
addition, we develop customized label sampling and a class-level ensemble to
maximize the utilization of knowledge, which implicitly mitigates the
distribution discrepancy across clients. Extensive experiments show that our
FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and
can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and
SCAFFOLD.
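To make the server-side procedure described above more concrete, the following PyTorch-style sketch illustrates data-free knowledge distillation in the spirit of FedFTG: a conditional generator produces pseudo-samples, the received client models act as an ensemble teacher with class-level weights, and the aggregated global model is fine-tuned as the student. All names and interfaces here (server_finetune, the generator signature, the specific loss weighting) are illustrative assumptions rather than the authors' released code, and the actual method contains additional terms (e.g. a diversity loss for the generator) that are omitted for brevity.

```python
# Minimal sketch of server-side data-free KD in the spirit of FedFTG.
# Assumes all models, the generator, and client_label_counts live on `device`.
import torch
import torch.nn.functional as F


def customized_label_sampling(client_label_counts, batch_size):
    """Sample pseudo-labels in proportion to how often each class occurs
    across all clients (assumed realization of 'customized label sampling').

    client_label_counts: [num_clients, num_classes] tensor of per-client
    label counts, which the server can collect without seeing raw data.
    """
    class_freq = client_label_counts.sum(dim=0).float()
    probs = class_freq / class_freq.sum()
    return torch.multinomial(probs, batch_size, replacement=True)


def class_level_ensemble(client_logits, client_label_counts, labels):
    """Weight each client's logits by how much data it holds for the sampled
    class, then sum (assumed realization of the 'class-level ensemble')."""
    # client_logits: [num_clients, batch, num_classes]
    counts = client_label_counts[:, labels].float()            # [num_clients, batch]
    weights = counts / counts.sum(dim=0, keepdim=True).clamp(min=1e-8)
    return (weights.unsqueeze(-1) * client_logits).sum(dim=0)  # [batch, num_classes]


def server_finetune(global_model, client_models, client_label_counts, generator,
                    steps=100, batch_size=64, latent_dim=100, device="cpu"):
    """Fine-tune the aggregated global model on generator-produced pseudo-data."""
    for m in client_models:                      # teachers stay frozen
        m.requires_grad_(False)
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_s = torch.optim.SGD(global_model.parameters(), lr=0.01)

    for _ in range(steps):
        y = customized_label_sampling(client_label_counts, batch_size).to(device)
        z = torch.randn(batch_size, latent_dim, device=device)

        # (1) Hard-sample mining: train the conditional generator so that its
        # pseudo-samples are classified as the sampled label by the teachers
        # while maximizing student/teacher disagreement (negative KL term).
        x = generator(z, y)
        teacher = class_level_ensemble(
            torch.stack([m(x) for m in client_models]), client_label_counts, y)
        student = global_model(x)
        kd = F.kl_div(F.log_softmax(student, dim=1),
                      F.softmax(teacher, dim=1), reduction="batchmean")
        loss_g = F.cross_entropy(teacher, y) - kd
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

        # (2) Knowledge distillation: update the global model (student) to
        # match the class-level ensemble of client predictions.
        with torch.no_grad():
            x = generator(z, y)
            teacher = class_level_ensemble(
                torch.stack([m(x) for m in client_models]), client_label_counts, y)
        loss_s = F.kl_div(F.log_softmax(global_model(x), dim=1),
                          F.softmax(teacher, dim=1), reduction="batchmean")
        opt_s.zero_grad()
        loss_s.backward()
        opt_s.step()

    return global_model
```

In a full training pipeline, a routine like server_finetune would run each communication round immediately after parameter averaging, which is consistent with the abstract's positioning of FedFTG as a plugin on top of FedAvg, FedProx, FedDyn, and SCAFFOLD.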
Related papers
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified Federated learning framework for Heterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches with less overhead and fewer training rounds.
arXiv Detail & Related papers (2024-05-27T15:25:32Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z) - Fake It Till Make It: Federated Learning with Consensus-Oriented
Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z) - DFRD: Data-Free Robustness Distillation for Heterogeneous Federated
Learning [20.135235291912185]
Federated Learning (FL) is a privacy-constrained decentralized machine learning paradigm.
We propose a new FL method (namely DFRD) to learn a robust global model in the data-heterogeneous and model-heterogeneous FL scenarios.
arXiv Detail & Related papers (2023-09-24T04:29:22Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a Class prototype Similarity Distillation method in a federated framework, to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - The Best of Both Worlds: Accurate Global and Personalized Models through
Federated Learning with Data-Free Hyper-Knowledge Distillation [17.570719572024608]
FedHKD (Federated Hyper-Knowledge Distillation) is a novel FL algorithm in which clients rely on knowledge distillation to train local models.
Unlike other KD-based pFL methods, FedHKD neither relies on a public dataset nor deploys a generative model at the server.
We conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvements in both personalized and global model performance.
arXiv Detail & Related papers (2023-01-21T16:20:57Z) - Closing the Gap between Client and Global Model Performance in
Heterogeneous Federated Learning [2.1044900734651626]
We show how the chosen approach for training custom client models has an impact on the global model.
We propose a new approach that combines KD and Learning without Forgetting (LwoF) to produce improved personalised models.
arXiv Detail & Related papers (2022-11-07T11:12:57Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Data-Free Knowledge Distillation for Heterogeneous Federated Learning [31.364314540525218]
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
Knowledge Distillation has recently emerged to tackle this issue, by refining the server model using aggregated knowledge from heterogeneous users.
We propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner.
arXiv Detail & Related papers (2021-05-20T22:30:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.