Differential Privacy for Adaptive Weight Aggregation in Federated Tumor
Segmentation
- URL: http://arxiv.org/abs/2308.00856v1
- Date: Tue, 1 Aug 2023 21:59:22 GMT
- Title: Differential Privacy for Adaptive Weight Aggregation in Federated Tumor
Segmentation
- Authors: Muhammad Irfan Khan, Esa Alhoniemi, Elina Kontio, Suleiman A. Khan,
and Mojtaba Jafaritadi
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach that safeguards privacy by creating an impartial global model while respecting the privacy of individual client data.
We present a differential privacy (DP) federated deep learning framework for medical image segmentation.
We extend our similarity weight aggregation (SimAgg) method to the DP-SimAgg algorithm, a differentially private similarity-weighted aggregation algorithm for brain tumor segmentation.
- Score: 0.16746114653388383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning approach that
safeguards privacy by creating an impartial global model while respecting the
privacy of individual client data. However, the conventional FL method can
introduce security risks when dealing with diverse client data, potentially
compromising privacy and data integrity. To address these challenges, we
present a differential privacy (DP) federated deep learning framework for
medical image segmentation. In this paper, we extend our similarity weight
aggregation (SimAgg) method to the DP-SimAgg algorithm, a differentially private
similarity-weighted aggregation algorithm for brain tumor segmentation in
multi-modal magnetic resonance imaging (MRI). Our DP-SimAgg method not only
enhances model segmentation capabilities but also provides an additional layer
of privacy preservation. Extensive benchmarking and evaluation of our
framework, with computational performance as a key consideration, demonstrate
that DP-SimAgg enables accurate and robust brain tumor segmentation while
minimizing communication costs during model training. This advancement is
crucial for preserving the privacy of medical image data and safeguarding
sensitive information. In conclusion, adding a differential privacy layer in
the global weight aggregation phase of federated brain tumor segmentation
provides a promising solution to privacy concerns without compromising
segmentation model efficacy. By leveraging DP, we ensure the protection of
client data against adversarial attacks and malicious participants.
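To make the recipe concrete, the following is a minimal sketch of differentially private similarity-weighted aggregation. It is not the authors' DP-SimAgg implementation: the clipping step, the softmax-of-distance weighting, and the sensitivity bound are all illustrative assumptions, and the function and parameter names are hypothetical.

```python
import numpy as np

def dp_similarity_aggregate(client_updates, clip_norm=1.0,
                            noise_multiplier=1.1, rng=None):
    """Illustrative sketch of DP similarity-weighted aggregation.

    NOT the authors' DP-SimAgg: the weighting and sensitivity bound
    below are plausible assumptions, not the paper's exact method.
    """
    if rng is None:
        rng = np.random.default_rng()
    updates = np.stack(client_updates)  # (n_clients, n_params)

    # 1. Clip each client update so its L2 norm is at most clip_norm,
    #    bounding any single client's influence on the aggregate.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    updates = updates * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # 2. Similarity weights: clients closer to the mean update get more
    #    mass (one plausible choice of similarity weighting).
    dists = np.linalg.norm(updates - updates.mean(axis=0), axis=1)
    weights = np.exp(-dists)
    weights /= weights.sum()

    # 3. Weighted aggregate plus Gaussian noise. The sensitivity bound is
    #    heuristic: data-dependent weights need a more careful analysis.
    aggregate = weights @ updates
    sensitivity = clip_norm * weights.max()
    return aggregate + rng.normal(0.0, noise_multiplier * sensitivity,
                                  size=aggregate.shape)
```

Because the similarity weights depend on the client data themselves, a rigorous privacy accounting is subtler than the fixed-weight case; the sketch only illustrates where clipping and noise enter the aggregation phase.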
Related papers
- FedDP: Privacy-preserving method based on federated learning for histopathology image segmentation [2.864354559973703]
This paper addresses the dispersed nature and privacy sensitivity of medical image data by employing a federated learning framework.
The proposed method, FedDP, minimally impacts model accuracy while effectively safeguarding the privacy of cancer pathology image data.
arXiv Detail & Related papers (2024-11-07T08:02:58Z)
- Empowering Healthcare through Privacy-Preserving MRI Analysis [3.6394715554048234]
We introduce the Ensemble-Based Federated Learning (EBFL) Framework.
The EBFL framework deviates from the conventional approach by emphasizing model features over sharing sensitive patient data.
We have achieved remarkable precision in the classification of brain tumors, including glioma, meningioma, pituitary, and non-tumor instances.
arXiv Detail & Related papers (2024-03-14T19:51:18Z)
- Client-Level Differential Privacy via Adaptive Intermediary in Federated Medical Imaging [33.494287036763716]
The trade-off between privacy protection and performance in differential privacy (DP) is still underexplored for real-world medical scenarios.
We propose to optimize the trade-off under the context of client-level DP, which focuses on privacy during communications.
We propose an adaptive intermediary strategy to improve performance without harming privacy.
arXiv Detail & Related papers (2023-07-24T06:12:37Z)
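For reference, the client-level Gaussian mechanism that the entry above builds on can be sketched as follows (standard DP-FedAvg style). The paper's adaptive intermediary strategy itself is not reproduced here, and the parameter names are illustrative.

```python
import numpy as np

def client_level_dp_round(client_deltas, clip_norm, noise_multiplier, rng=None):
    """Sketch of standard client-level DP aggregation (DP-FedAvg style).

    Client-level DP protects a client's entire contribution in a round;
    the paper's adaptive intermediary sits on top of a mechanism like
    this and is not reproduced here.
    """
    if rng is None:
        rng = np.random.default_rng()

    # Clip each client's whole model update, not individual records.
    clipped = []
    for delta in client_deltas:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))

    # Average, then add noise; the mean's L2 sensitivity is clip_norm / n.
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_deltas)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```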
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Personalized and privacy-preserving federated heterogeneous medical image analysis with PPPML-HMI [15.031967569155748]
PPPML-HMI is an open-source learning paradigm for personalized and privacy-preserving heterogeneous medical image analysis.
To the best of our knowledge, this is the first time personalization and privacy protection have been achieved simultaneously in a federated scenario.
For the real-world task, PPPML-HMI achieved a $\sim$5% higher Dice score on average compared to conventional FL.
arXiv Detail & Related papers (2023-02-20T07:37:03Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Complex-valued Federated Learning with Differential Privacy and MRI Applications [51.34714485616763]
We introduce the complex-valued Gaussian mechanism, whose behaviour we characterise in terms of $f$-DP, $(\varepsilon, \delta)$-DP and Rényi-DP.
We present novel complex-valued neural network primitives compatible with DP.
Experimentally, we showcase a proof-of-concept by training federated complex-valued neural networks with DP on a real-world task.
arXiv Detail & Related papers (2021-10-07T14:03:00Z)
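A hedged sketch of what a complex-valued Gaussian mechanism can look like, for the entry above. The circularly symmetric noise (independent $N(0, \sigma^2/2)$ on the real and imaginary parts) and the sensitivity calibration are assumptions for illustration; the paper's exact $f$-DP calibration is not reproduced.

```python
import numpy as np

def complex_gaussian_mechanism(values, l2_sensitivity, noise_multiplier,
                               rng=None):
    """Sketch: privatize a complex-valued query with circularly symmetric
    complex Gaussian noise (an assumption, not the paper's calibration)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = noise_multiplier * l2_sensitivity
    # Independent N(0, sigma^2 / 2) on real and imaginary parts gives a
    # complex noise variable with total variance sigma^2.
    real = rng.normal(0.0, sigma / np.sqrt(2), size=values.shape)
    imag = rng.normal(0.0, sigma / np.sqrt(2), size=values.shape)
    return values + real + 1j * imag
```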
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
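The PriMIA entry above evaluates security against a gradient-based model inversion attack. A minimal sketch of that attack family, in the style of Deep Leakage from Gradients (Zhu et al., 2019), follows; it assumes a PyTorch model and intercepted per-parameter gradients, and it is not PriMIA's evaluation code.

```python
import torch

def gradient_inversion_attack(model, loss_fn, observed_grads, input_shape,
                              n_classes, steps=300, lr=0.1):
    """Sketch of a gradient-leakage / model-inversion attack: optimize a
    dummy input and soft label until their gradients match the gradients
    intercepted from a federated client. Illustrative, not PriMIA code."""
    dummy_x = torch.randn(input_shape, requires_grad=True)
    dummy_y = torch.randn(n_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]

    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), dummy_y.softmax(dim=-1))
        # Differentiable gradients of the dummy batch w.r.t. the model.
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Drive the dummy gradients toward the intercepted ones.
        grad_diff = sum(((g - og) ** 2).sum()
                        for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        opt.step()
    return dummy_x.detach()  # the reconstructed input
```

Here `loss_fn` must accept soft targets, e.g. `lambda logits, probs: -(probs * logits.log_softmax(-1)).sum()`, and `input_shape` should include the batch dimension, e.g. a hypothetical `(1, 1, 240, 240)` for a single MRI slice.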