DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
- URL: http://arxiv.org/abs/2409.07734v2
- Date: Mon, 16 Sep 2024 08:18:59 GMT
- Title: DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning
- Authors: Kangyang Luo, Shuai Wang, Yexuan Fu, Renrong Shao, Xiang Li, Yunshi Lan, Ming Gao, Jinlong Shu
- Abstract summary: Federated Learning (FL) is a distributed machine learning scheme in which clients jointly participate in the collaborative training of a global model.
We propose a new data-free dual-generator adversarial distillation method (namely DFDG) for one-shot FL.
DFDG is executed in an adversarial manner and comprises two parts: dual-generator training and dual-model distillation.
- Score: 17.34783038347845
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning scheme in which clients jointly participate in the collaborative training of a global model by sharing model information rather than their private datasets. In light of concerns about communication and privacy, one-shot FL with a single communication round has emerged as a promising solution. However, existing one-shot FL methods either require public datasets, focus on model-homogeneous settings, or distill only limited knowledge from local models, making it difficult or even impractical to train a robust global model. To address these limitations, we propose a new data-free dual-generator adversarial distillation method (namely DFDG) for one-shot FL, which can explore a broader training space of the local models by training dual generators. DFDG is executed in an adversarial manner and comprises two parts: dual-generator training and dual-model distillation. In dual-generator training, we examine each generator with respect to fidelity, transferability and diversity to ensure its utility, and additionally tailor a cross-divergence loss to lessen the overlap of the dual generators' output spaces. In dual-model distillation, the trained dual generators work together to provide the training data for updates of the global model. Finally, extensive experiments on various image classification tasks show that DFDG achieves significant gains in accuracy over SOTA baselines.
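The abstract names the two phases and the generator loss terms but not their exact formulas, so the following is a minimal PyTorch sketch under stated assumptions: the four loss terms are illustrative stand-ins keyed to the names above (fidelity, transferability, diversity, cross-divergence), and CondGenerator, generator_step, and distill_step are hypothetical helpers for this sketch, not the authors' released code.

```python
# Hypothetical sketch of DFDG's two phases. The loss names follow the
# abstract; their exact forms here are illustrative stand-ins, not the
# paper's definitions. `teachers` are the clients' uploaded local models,
# `student` is the server's global model; all are assumed to be
# classifiers over flat images.
import torch
import torch.nn.functional as F


class CondGenerator(torch.nn.Module):
    """Toy conditional generator: label embedding + MLP to a flat image."""

    def __init__(self, nz=100, n_cls=10, img_dim=784):
        super().__init__()
        self.emb = torch.nn.Embedding(n_cls, nz)
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * nz, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, img_dim), torch.nn.Tanh())

    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1))


def ensemble_logits(teachers, x):
    # Average the uploaded local models' logits into a server-side teacher.
    return torch.stack([t(x) for t in teachers]).mean(dim=0)


def generator_step(G, G_other, student, teachers, opt, bs=64, nz=100, n_cls=10):
    """Phase 1: train one generator adversarially against the student."""
    z = torch.randn(bs, nz)
    y = torch.randint(0, n_cls, (bs,))
    x = G(z, y)
    t_logits = ensemble_logits(teachers, x)
    # Fidelity: samples should be classified as their conditioned label.
    loss_fid = F.cross_entropy(t_logits, y)
    # Transferability: adversarially seek samples on which the student
    # still disagrees with the teachers (maximize the student-teacher KL).
    loss_tran = -F.kl_div(F.log_softmax(student(x), -1),
                          F.softmax(t_logits, -1), reduction="batchmean")
    # Diversity: a mode-seeking term; distant latents should map to
    # distant samples.
    z2 = torch.randn(bs, nz)
    x2 = G(z2, y)
    ratio = (x - x2).flatten(1).norm(dim=1) / ((z - z2).norm(dim=1) + 1e-6)
    loss_div = -ratio.mean()
    # Cross-divergence (stand-in): push this generator's outputs away from
    # the other generator's outputs for the same (z, y), shrinking the
    # overlap of the two output spaces.
    with torch.no_grad():
        x_other = G_other(z, y)
    loss_cd = torch.exp(-F.mse_loss(x, x_other))
    loss = loss_fid + loss_tran + loss_div + loss_cd  # term weights omitted
    opt.zero_grad()
    loss.backward()
    opt.step()


def distill_step(student, teachers, G1, G2, opt, bs=64, nz=100, n_cls=10):
    """Phase 2: both generators supply data to distill the global model."""
    z = torch.randn(bs, nz)
    y = torch.randint(0, n_cls, (bs,))
    with torch.no_grad():
        x = torch.cat([G1(z, y), G2(z, y)])
        t_probs = F.softmax(ensemble_logits(teachers, x), -1)
    loss = F.kl_div(F.log_softmax(student(x), -1), t_probs,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The adversarial loop implied by the abstract alternates these steps: generator_step pushes each generator toward samples where the global model still disagrees with the local ensemble, the cross-divergence term keeps the two generators covering different regions of that space, and distill_step then closes the gap on the combined synthetic data. The optimizer passed to generator_step is assumed to hold only that generator's parameters, so the frozen teachers and the student are not updated in phase 1.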
Related papers
- Federated Model Heterogeneous Matryoshka Representation Learning [33.04969829305812]
Model heterogeneous federated learning (MHeteroFL) enables FL clients to collaboratively train models with heterogeneous structures in a distributed fashion.
Existing methods rely on training loss to transfer knowledge between an MHeteroFL server and a client model.
We propose a new representation approach for supervised learning tasks using Matryoshka models.
arXiv Detail & Related papers (2024-06-01T16:37:08Z) - Robust Training of Federated Models with Extremely Label Deficiency [84.00832527512148]
Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.
We propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data.
Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings.
arXiv Detail & Related papers (2024-02-22T10:19:34Z) - pFedES: Model Heterogeneous Personalized Federated Learning with Feature Extractor Sharing [19.403843478569303]
We propose a model-heterogeneous personalized Federated learning approach based on feature extractor sharing.
It incorporates a small homogeneous feature extractor into each client's heterogeneous local model.
It achieves 1.61% higher test accuracy, while reducing communication and computation costs by 99.6% and 82.9%, respectively.
arXiv Detail & Related papers (2023-11-12T15:43:39Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning [20.135235291912185]
Federated Learning (FL) is a privacy-constrained decentralized machine learning paradigm.
We propose a new FL method (namely DFRD) to learn a robust global model in the data-heterogeneous and model-heterogeneous FL scenarios.
arXiv Detail & Related papers (2023-09-24T04:29:22Z) - Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves the pre-trained generators of GAN models.
arXiv Detail & Related papers (2023-05-29T04:22:57Z) - FedSiam-DA: Dual-aggregated Federated Learning via Siamese Network under Non-IID Data [21.95009868875851]
Although federated learning can address the data-island problem, it remains challenging to train on heterogeneous data in real applications.
We propose FedSiam-DA, a novel dual-aggregated contrastive federated learning approach.
arXiv Detail & Related papers (2022-11-17T09:05:25Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
arXiv Detail & Related papers (2021-01-27T10:10:18Z) - Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN [80.17705319689139]
We propose a data-free knowledge amalgamation strategy to craft a well-behaved multi-task student network from multiple single/multi-task teachers.
Without any training data, the proposed method achieves surprisingly competitive results, even compared with some fully supervised methods.
arXiv Detail & Related papers (2020-03-20T03:20:52Z)