Privacy Risks Analysis and Mitigation in Federated Learning for Medical Images
- URL: http://arxiv.org/abs/2311.06643v2
- Date: Wed, 31 Jan 2024 18:06:16 GMT
- Title: Privacy Risks Analysis and Mitigation in Federated Learning for Medical Images
- Authors: Badhan Chandra Das, M. Hadi Amini, Yanzhao Wu
- Abstract summary: Federated learning (FL) is gaining increasing popularity in the medical domain for analyzing medical images.
Recent studies have revealed that the default settings of FL may leak private training data under privacy attacks.
- Score: 2.9480813253164535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is gaining increasing popularity in the medical
domain for analyzing medical images, as it is considered an effective technique
for safeguarding sensitive patient data and complying with privacy regulations.
However, several recent studies have revealed that the default settings of FL
may leak private training data under privacy attacks. Thus, it remains unclear
whether, and to what extent, such privacy risks of FL exist in the medical
domain, and if so, how to mitigate them. In this paper, first, we
propose a holistic framework for Medical data Privacy risk analysis and
mitigation in Federated Learning (MedPFL) to analyze privacy risks and develop
effective mitigation strategies in FL for protecting private medical data.
Second, we demonstrate the substantial privacy risks of using FL to process
medical images, where adversaries can easily perform privacy attacks to
reconstruct private medical images accurately. Third, we show that the defense
approach of adding random noise may not always work effectively to protect
medical images against privacy attacks in FL, which poses unique and pressing
privacy-protection challenges for medical data.
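The reconstruction attacks the abstract refers to are typically gradient-inversion attacks in the style of Deep Leakage from Gradients (DLG), where an adversary optimizes a dummy image until its gradients match the gradients a client shares. Below is a minimal, illustrative PyTorch sketch of this attack, including the random-noise defense discussed above; the model, image shape, and hyperparameters are assumptions for illustration, not the authors' exact MedPFL setup.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, true_grads, image_shape, num_classes,
                     steps=300, lr=0.1, defense_noise_std=0.0):
    """DLG-style reconstruction of a private input from shared gradients."""
    # Optionally simulate the "add random noise" defense by perturbing the
    # gradients before the adversary sees them (illustrative, not MedPFL code).
    if defense_noise_std > 0:
        true_grads = [g + defense_noise_std * torch.randn_like(g)
                      for g in true_grads]

    # The adversary optimizes a dummy image and a dummy soft label.
    dummy_x = torch.randn(1, *image_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            # Soft-label cross entropy (requires PyTorch >= 1.10).
            loss = F.cross_entropy(model(dummy_x),
                                   F.softmax(dummy_y, dim=-1))
            grads = torch.autograd.grad(loss, model.parameters(),
                                        create_graph=True)
            # Match the dummy gradients to the observed (possibly noised) ones.
            diff = sum(((g - t) ** 2).sum()
                       for g, t in zip(grads, true_grads))
            diff.backward()
            return diff
        optimizer.step(closure)
    return dummy_x.detach()
```

With defense_noise_std set to a small value, reconstructions are often only blurred rather than prevented, which is consistent with the paper's finding that adding random noise does not always protect medical images.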
Related papers
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL algorithms are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- In-depth Analysis of Privacy Threats in Federated Learning for Medical Data [2.6986500640871482]
Federated learning is emerging as a promising machine learning technique in the medical field for analyzing medical images.
Recent studies have revealed that the default settings of federated learning may inadvertently expose private training data to privacy attacks.
We make three original contributions to privacy risk analysis and mitigation in federated learning for medical data.
arXiv Detail & Related papers (2024-09-27T16:45:35Z)
- Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification [15.382184404673389]
The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions.
Privacy-preserving mechanisms are paramount in medical image analysis, given the sensitive nature of the data.
This study addresses that need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification (a generic sketch of this mechanism appears after this list).
arXiv Detail & Related papers (2023-06-30T16:48:58Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but, more importantly, it significantly reduces the risk of privacy leakage (see the distillation sketch after this list).
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- A Review of Anonymization for Healthcare Data [0.30586855806896046]
Health data is highly sensitive and subject to regulations such as the General Data Protection Regulation (GDPR).
arXiv Detail & Related papers (2021-04-13T21:44:29Z)
- Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods [10.504951891644474]
We develop and evaluate a privacy defense protocol based on a generative adversarial network (GAN).
We validate the proposed method on a retinal diagnostics AI system for diabetic retinopathy, which bears the risk of leaking private information.
arXiv Detail & Related papers (2021-03-04T15:02:57Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework for Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the COVID-19 pandemic.
We present an overview of the rationale, design, ethical considerations, and privacy strategy of COVI, a COVID-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
- PGLP: Customizable and Rigorous Location Privacy through Policy Graph [68.3736286350014]
We propose a new location privacy notion called PGLP, which provides a rich interface to release private locations with customizable and rigorous privacy guarantees.
Specifically, we formalize a user's location privacy requirements using a location policy graph, which is expressive and customizable.
Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
arXiv Detail & Related papers (2020-05-04T04:25:59Z)
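For the differential-privacy entries above ("Vision Through the Veil" and the multi-site segmentation paper), the standard mechanism is to clip each client update and add calibrated Gaussian noise before aggregation. The sketch below is a generic, illustrative version of that mechanism, not code from either paper; clip_norm and noise_multiplier are hypothetical parameters.

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0):
    """Clip a client's model update and add calibrated Gaussian noise."""
    # Bound the update's L2 norm so each client's contribution has
    # sensitivity at most clip_norm.
    total_norm = torch.cat([p.reshape(-1) for p in update]).norm().item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    # The noise standard deviation scales with the sensitivity bound.
    std = noise_multiplier * clip_norm
    return [p * scale + std * torch.randn_like(p) for p in update]
```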
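For the ensemble attention distillation entry, the underlying pattern is one-way offline knowledge distillation: client models score unlabeled public data once, and a central student is trained to match the aggregated predictions, so raw local data and per-client gradients never leave the clients. The stand-in sketch below uses a plain temperature-scaled KL loss rather than the paper's attention-distillation objective; student, teacher_logits, and T are illustrative.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher_logits, public_batch, optimizer, T=2.0):
    """One distillation step: the student mimics offline teacher outputs."""
    optimizer.zero_grad()
    student_logits = student(public_batch)
    # Soften both distributions with temperature T; the KL term pulls the
    # student toward the precomputed teacher-ensemble predictions.
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    loss.backward()
    optimizer.step()
    return loss.item()
```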
This list is automatically generated from the titles and abstracts of the papers on this site.