Differentially private federated deep learning for multi-site medical
image segmentation
- URL: http://arxiv.org/abs/2107.02586v1
- Date: Tue, 6 Jul 2021 12:57:32 GMT
- Title: Differentially private federated deep learning for multi-site medical
image segmentation
- Authors: Alexander Ziller, Dmitrii Usynin, Nicolas Remerscheid, Moritz Knolle,
Marcus Makowski, Rickmer Braren, Daniel Rueckert, Georgios Kaissis
- Abstract summary: Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
- Score: 56.30543374146002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative machine learning techniques such as federated learning (FL)
enable the training of models on effectively larger datasets without data
transfer. Recent initiatives have demonstrated that segmentation models trained
with FL can achieve performance similar to locally trained models. However, FL
is not a fully privacy-preserving technique and privacy-centred attacks can
disclose confidential patient data. Thus, supplementing FL with
privacy-enhancing technologies (PTs) such as differential privacy (DP) is a
requirement for clinical applications in a multi-institutional setting. The
application of PTs to FL in medical imaging and the trade-offs between privacy
guarantees and model utility, the ramifications on training performance and the
susceptibility of the final models to attacks have not yet been conclusively
investigated. Here we demonstrate the first application of differentially
private gradient descent-based FL on the task of semantic segmentation in
computed tomography. We find that high segmentation performance is possible
under strong privacy guarantees with an acceptable training time penalty. We
furthermore demonstrate the first successful gradient-based model inversion
attack on a semantic segmentation model and show that the application of DP
prevents it from divulging sensitive image features.
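To make the training procedure concrete, here is a minimal sketch of the differentially private gradient step underlying DP gradient descent-based FL: each site clips per-sample gradients to a fixed L2 norm and adds calibrated Gaussian noise before updating. This is an illustrative PyTorch sketch, not the authors' implementation; the function name, microbatching loop, and hyperparameters are assumptions.

```python
# Illustrative DP-SGD step (assumed PyTorch): clip each per-sample gradient
# to L2 norm <= clip_norm, sum, add Gaussian noise, then update the model.
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # Hypothetical helper; hyperparameters are illustrative, not from the paper.
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):  # per-sample gradients via microbatches of size 1
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)  # clip
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.normal(0.0, noise_mult * clip_norm, size=p.shape)
            p.add_(-(lr / len(xs)) * (s + noise))  # noisy averaged update
```

In practice, libraries such as Opacus vectorize the per-sample gradient computation; in the federated setting, each site runs such steps locally so that only noised updates ever leave the institution.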
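The model inversion result can likewise be illustrated. A gradient-based inversion attack optimizes a dummy input so that its gradients match those intercepted from a client. The following toy sketch (assumed PyTorch, with the label taken as known for simplicity, whereas real attacks may recover it jointly) shows the core matching loop, not the paper's actual attack.

```python
# Toy gradient inversion ("gradient matching") loop: recover an input whose
# gradients match intercepted client gradients. Illustrative only.
import torch

def invert_gradients(model, loss_fn, target_grads, x_shape, y,
                     steps=500, lr=0.1):
    dummy = torch.randn(x_shape, requires_grad=True)  # random initial image
    opt = torch.optim.Adam([dummy], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]

    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(dummy), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 distance between the dummy input's gradients and the observed ones.
        match = sum((g - t).pow(2).sum() for g, t in zip(grads, target_grads))
        match.backward()  # differentiate the match loss w.r.t. the dummy input
        opt.step()
    return dummy.detach()
```

DP defeats this kind of attack because the clipped, noised gradients no longer encode enough information about any individual image for the matching loss to recover its features.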
Related papers
- FACMIC: Federated Adaptative CLIP Model for Medical Image Classification [12.166024140377337]
We introduce a federated adaptive Contrastive Language-Image Pretraining (CLIP) model for classification tasks.
We employ a lightweight and efficient feature attention module for CLIP that selects suitable features for each client's data.
We propose a domain adaptation technique to reduce differences in data distribution between clients.
arXiv Detail & Related papers (2024-10-08T13:24:10Z)
- Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data [5.448470199971472]
Deep learning holds immense promise for aiding radiologists in breast cancer detection.
However, achieving optimal model performance is hampered by limitations in the availability and sharing of data.
Traditional deep learning models can inadvertently leak sensitive training information.
This work addresses these challenges by exploring and quantifying the utility of privacy-preserving deep learning techniques.
arXiv Detail & Related papers (2024-07-17T15:52:45Z)
- Privacy Preserving Federated Learning in Medical Imaging with Uncertainty Estimation [15.63535423357971]
Machine learning (ML) and Artificial Intelligence (AI) have fueled remarkable advancements, particularly in healthcare. Within medical imaging, ML models hold the promise of improving disease diagnoses, treatment planning, and post-treatment monitoring.
Privacy concerns surrounding patient data hinder the assembly of large training datasets needed for developing and training accurate, robust, and generalizable models.
Federated Learning (FL) emerges as a compelling solution, enabling organizations to collaborate on ML model training by sharing model training information (gradients) rather than data (e.g., medical images).
arXiv Detail & Related papers (2024-06-18T17:35:52Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Like existing FL approaches, our technique uses decentralized, heterogeneous local data, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging [23.08596805950814]
We present a robust and label-efficient self-supervised FL framework for medical image analysis.
Specifically, we introduce a novel distributed self-supervised pre-training paradigm into the existing FL pipeline.
We show that our self-supervised FL algorithm generalizes well to out-of-distribution data and learns federated models more effectively in limited label scenarios.
arXiv Detail & Related papers (2022-05-17T18:33:43Z)
- Federated Contrastive Learning for Volumetric Medical Image Segmentation [16.3860181959878]
Federated learning (FL) can help by learning a shared model while keeping training data local for privacy.
Traditional FL requires fully-labeled data for training, which is inconvenient or sometimes infeasible to obtain.
In this work, we propose a federated contrastive learning (FCL) framework for volumetric medical image segmentation with limited annotations.
arXiv Detail & Related papers (2022-04-23T03:47:23Z)
- Closing the Generalization Gap of Cross-silo Federated Medical Image Segmentation [66.44449514373746]
Cross-silo federated learning (FL) has attracted much attention in medical imaging analysis with deep learning in recent years.
There can be a gap between a model trained with FL and one trained centrally.
We propose a novel training framework, FedSM, to avoid the client drift issue and successfully close the generalization gap.
arXiv Detail & Related papers (2022-03-18T19:50:07Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates; a minimal sketch of client-level update clipping appears after this list.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
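As referenced in the clipping entry above, here is a minimal sketch of client-level DP in FedAvg: each client's model update is clipped to a fixed L2 norm and Gaussian noise is added to the aggregate on the server. The function and hyperparameters are illustrative assumptions, not the cited paper's algorithm.

```python
# Sketch of client-level DP aggregation for FedAvg (assumed PyTorch):
# clip each client's update, average, and add server-side Gaussian noise.
import torch

def dp_fedavg_aggregate(global_params, client_params_list,
                        clip_norm=1.0, noise_mult=1.0):
    n = len(client_params_list)
    agg = [torch.zeros_like(p) for p in global_params]

    for client_params in client_params_list:
        # The client update is the difference from the current global model.
        delta = [c - g for c, g in zip(client_params, global_params)]
        norm = torch.sqrt(sum(d.pow(2).sum() for d in delta))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)  # clip the update
        for a, d in zip(agg, delta):
            a.add_(d * scale)

    new_params = []
    for g, a in zip(global_params, agg):
        noise = torch.normal(0.0, noise_mult * clip_norm, size=g.shape)
        new_params.append(g + (a + noise) / n)  # noisy averaged update
    return new_params
```

The clipping bound trades bias against noise: a small clip_norm biases large client updates (the effect analyzed in the convergence paper above), while a large one requires more noise for the same privacy level.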
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.