Federated Learning with Privacy-Preserving Ensemble Attention Distillation
- URL: http://arxiv.org/abs/2210.08464v1
- Date: Sun, 16 Oct 2022 06:44:46 GMT
- Title: Federated Learning with Privacy-Preserving Ensemble Attention Distillation
- Authors: Xuan Gong, Liangchen Song, Rishi Vedula, Abhishek Sharma, Meng Zheng,
Benjamin Planche, Arun Innanje, Terrence Chen, Junsong Yuan, David Doermann,
Ziyan Wu
- Abstract summary: Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Like existing FL approaches, our technique uses decentralized and heterogeneous local data; more importantly, it significantly reduces the risk of privacy leakage.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a machine learning paradigm where many local nodes
collaboratively train a central model while keeping the training data
decentralized. This is particularly relevant for clinical applications since
patient data are usually not allowed to be transferred out of medical
facilities, leading to the need for FL. Existing FL methods typically share
model parameters or employ co-distillation to address the issue of unbalanced
data distribution. However, they also require numerous rounds of synchronized
communication and, more importantly, suffer from a privacy leakage risk. In
this work, we propose a privacy-preserving FL framework that leverages
unlabeled public data for one-way offline knowledge distillation. The central
model is learned from local knowledge via ensemble attention distillation.
Like existing FL approaches, our technique uses decentralized and
heterogeneous local data, but, more importantly, it significantly reduces the
risk of privacy leakage. Extensive experiments on image classification,
segmentation, and reconstruction tasks demonstrate that our method achieves
very competitive performance with more robust privacy preservation.
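To make the distillation step concrete, the following is a minimal sketch of one-way offline ensemble attention distillation, not the authors' implementation: frozen local models, trained on private data, are queried on unlabeled public images, and the central model is fitted to the ensemble of their spatial attention maps. The names `central`, `local_models`, and `public_loader`, and the assumption that each model exposes a `features(x)` method returning a conv feature map, are illustrative.

```python
# One-way offline ensemble attention distillation (minimal sketch, not the
# authors' code). Frozen local models act as teachers; the central student is
# fitted to the ensemble of their spatial attention maps on public data.
import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    """Collapse a conv feature map (N, C, H, W) into an L2-normalized
    spatial attention map (N, H*W), in the style of attention transfer."""
    att = features.pow(2).mean(dim=1).flatten(1)
    return F.normalize(att, dim=1)

def distill_central(central, local_models, public_loader, epochs=10, lr=1e-3):
    # Assumes every model exposes `features(x)` returning feature maps of
    # matching spatial size -- an illustrative interface, not a given one.
    opt = torch.optim.Adam(central.parameters(), lr=lr)
    for m in local_models:
        m.eval()                        # teachers stay frozen
    for _ in range(epochs):
        for x in public_loader:         # unlabeled public images only
            with torch.no_grad():       # one-way: nothing flows back to teachers
                teacher_att = torch.stack(
                    [attention_map(m.features(x)) for m in local_models]
                ).mean(dim=0)           # ensemble of local attention maps
            student_att = attention_map(central.features(x))
            loss = F.mse_loss(student_att, teacher_att)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return central
```

Because only model responses on public data are used, and the distillation is one-way and offline, no private images, labels, or per-example gradients leave the local sites; this is where the reduced leakage risk comes from.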
Related papers
- Privacy-Preserving Federated Learning with Consistency via Knowledge Distillation Using Conditional Generator (2024-09-11)
  Federated Learning (FL) is gaining popularity as a distributed learning framework that shares only model parameters or updates and keeps private data local.
  We propose FedMD-CG, a novel FL method with highly competitive performance and a high level of privacy preservation.
  We conduct extensive experiments on various image classification tasks to validate the superiority of FedMD-CG.
- Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization (2024-07-12)
  Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source.
  Recent findings suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models.
  We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection.
- Federated Learning via Input-Output Collaborative Distillation (2023-12-22)
  Federated learning (FL) is a machine learning paradigm in which distributed local nodes collaboratively train a central model without sharing individually held private data.
  We propose a data-free FL framework based on local-to-central collaborative distillation with direct exploitation of the input and output spaces.
- Federated Learning with Reduced Information Leakage and Computation (2023-10-10)
  Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
  This paper introduces Upcycled-FL, a strategy that applies first-order approximation at every even round of model update.
  Under this strategy, half of the FL updates incur no information leakage and require much less computation and transmission cost.
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks for Data Privacy (2023-05-19)
  Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
  This work proposes a novel FL framework that requires only partial GAN model sharing.
  Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
- Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation (2022-09-10)
  Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
  Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
  We develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation.
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training (2022-03-05)
  Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
  We propose FedReg, an algorithm to accelerate FL by alleviating knowledge forgetting in the local training stage.
  Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? (2022-02-14)
  Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
  Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
  In this work, we show that these attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack (a sketch of this attack class follows after this list).
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning (2021-11-28)
  Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
- Differentially private federated deep learning for multi-site medical image segmentation (2021-07-06)
  Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
  Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
  However, FL is not a fully privacy-preserving technique, and privacy-centred attacks can disclose confidential patient data.
This list is automatically generated from the titles and abstracts of the papers on this site.