Defending Medical Image Diagnostics against Privacy Attacks using
Generative Methods
- URL: http://arxiv.org/abs/2103.03078v1
- Date: Thu, 4 Mar 2021 15:02:57 GMT
- Title: Defending Medical Image Diagnostics against Privacy Attacks using
Generative Methods
- Authors: William Paul, Yinzhi Cao, Miaomiao Zhang, and Phil Burlina
- Abstract summary: We develop and evaluate a privacy defense protocol based on a generative adversarial network (GAN).
We validate the proposed method on retinal diagnostics AI used for diabetic retinopathy, which bears the risk of leaking private information.
- Score: 10.504951891644474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models used in medical imaging diagnostics can be
vulnerable to a variety of privacy attacks, including membership inference
attacks, that lead to violations of regulations governing the use of medical
data and threaten to compromise their effective deployment in the clinic. In
contrast to most recent work in privacy-aware ML that has been focused on model
alteration and post-processing steps, we propose here a novel and complementary
scheme that enhances the security of medical data by controlling the data
sharing process. We develop and evaluate a privacy defense protocol based on
a generative adversarial network (GAN) that allows a medical data sourcer
(e.g. a hospital) to provide an external agent (a modeler) a proxy dataset
synthesized from the original images, so that the resulting diagnostic systems
made available to model consumers are rendered resilient to privacy attackers.
We validate the proposed method on retinal diagnostics AI used for diabetic
retinopathy, which bears the risk of leaking private information. To
incorporate concerns of both privacy advocates and modelers, we introduce a
metric to evaluate privacy and utility performance in combination, and
demonstrate, using these novel and classical metrics, that our approach, by
itself or in conjunction with other defenses, provides state of the art (SOTA)
performance for defending against privacy attacks.
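To make the protocol concrete, here is a minimal, hypothetical sketch of the data-sharing scheme the abstract describes: the data sourcer trains a GAN on its private images, samples a synthetic proxy dataset, and releases only that proxy to the modeler. Random tensors stand in for retinal images, the networks and training loop are illustrative toys, and nothing below reproduces the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins: 64-dim vectors replace real retinal images, and the
# networks are deliberately tiny. Nothing here is the authors' code.
IMG_DIM, LATENT_DIM, BATCH = 64, 16, 32

G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                  nn.Linear(128, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

private_images = torch.rand(256, IMG_DIM) * 2 - 1  # never leaves the hospital

for step in range(200):
    real = private_images[torch.randint(0, 256, (BATCH,))]
    fake = G(torch.randn(BATCH, LATENT_DIM))
    # Discriminator: separate real patient images from synthetic ones.
    loss_d = (bce(D(real), torch.ones(BATCH, 1)) +
              bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: produce samples the discriminator accepts as real.
    loss_g = bce(D(G(torch.randn(BATCH, LATENT_DIM))), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Only this synthetic proxy dataset is handed to the external modeler,
# who trains the diagnostic model on it instead of on patient data.
with torch.no_grad():
    proxy_images = G(torch.randn(512, LATENT_DIM))
```

Because the released diagnostic model is trained only on synthetic samples, a membership inference attacker probing that model gets no direct signal about which real patients' images the hospital holds.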
Related papers
- FedDP: Privacy-preserving method based on federated learning for histopathology image segmentation [2.864354559973703]
This paper addresses the dispersed nature and privacy sensitivity of medical image data by employing a federated learning framework.
The proposed method, FedDP, minimally impacts model accuracy while effectively safeguarding the privacy of cancer pathology image data (a generic federated-averaging sketch appears after this list).
arXiv Detail & Related papers (2024-11-07T08:02:58Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model (a baseline membership-inference sketch appears after this list).
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Explainable Machine Learning-Based Security and Privacy Protection Framework for Internet of Medical Things Systems [1.8434042562191815]
The Internet of Medical Things (IoMT) transcends traditional medical boundaries, enabling a transition from reactive treatment to proactive prevention.
Its benefits are countered by significant security challenges that endanger the lives of its users due to the sensitivity and value of the processed data.
A new framework for Intrusion Detection Systems (IDS) is introduced, leveraging Artificial Neural Networks (ANN) for intrusion detection while utilizing Federated Learning (FL) for privacy preservation.
arXiv Detail & Related papers (2024-03-14T11:57:26Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Privacy-Preserving Medical Image Classification through Deep Learning and Matrix Decomposition [0.0]
Deep learning (DL) solutions have been extensively researched in the medical domain in recent years.
The usage of health-related data is strictly regulated, and processing medical records outside the hospital environment demands robust data protection measures.
In this paper, we use singular value decomposition (SVD) and principal component analysis (PCA) to obfuscate the medical images before employing them in the DL analysis.
The capability of DL algorithms to extract relevant information from secured data is assessed on a task of angiographic view classification based on obfuscated frames (a rank-truncation SVD sketch appears after this list).
arXiv Detail & Related papers (2023-08-31T08:21:09Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of model monitoring and updates.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
arXiv Detail & Related papers (2021-10-12T19:57:12Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- A Review of Anonymization for Healthcare Data [0.30586855806896046]
Health data is highly sensitive and subject to regulations such as the General Data Protection Regulation (GDPR).
arXiv Detail & Related papers (2021-04-13T21:44:29Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
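For the FedDP entry above, a generic federated-averaging round illustrates why images never leave a site: each hospital trains locally on its own data, and only model weights are exchanged and averaged. This is plain FedAvg under assumed toy models and data, not FedDP's specific procedure.

```python
import torch
import torch.nn as nn

def local_update(model, x, y, steps=5, lr=0.1):
    """One site's local training on its private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server-side averaging of the clients' weights."""
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

global_model = nn.Linear(64, 2)  # toy stand-in for a segmentation model
site_data = [(torch.randn(32, 64), torch.randint(0, 2, (32,)))
             for _ in range(3)]  # three hospitals, data stays local

for rnd in range(10):  # communication rounds
    states = []
    for x, y in site_data:
        local = nn.Linear(64, 2)
        local.load_state_dict(global_model.state_dict())
        states.append(local_update(local, x, y))
    global_model.load_state_dict(fed_avg(states))
```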
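For the privacy backdoor entry above, the underlying attack class is membership inference. A common baseline is a loss-threshold test: samples on which the model's loss is unusually low are guessed to be training members. The model, data, and threshold below are illustrative placeholders, not taken from any of the papers listed.

```python
import torch
import torch.nn as nn

def loss_threshold_mia(model, x, y, threshold):
    """Guess 'member' when the per-sample loss falls below a threshold.

    Overfit models tend to assign lower loss to training points than to
    unseen points from the same distribution, which this test exploits.
    """
    criterion = nn.CrossEntropyLoss(reduction="none")
    with torch.no_grad():
        losses = criterion(model(x), y)
    return losses < threshold  # True = predicted training member

# Toy usage with a random linear "diagnostic model".
model = nn.Linear(64, 2)
x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
print(loss_threshold_mia(model, x, y, threshold=0.7))
```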
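For the SVD/PCA obfuscation entry above, a rank-k SVD truncation shows the idea: keep only the largest singular components, which preserve coarse structure for the downstream classifier, and discard the fine detail most likely to identify a patient. The image size and rank below are illustrative, not the paper's settings.

```python
import numpy as np

def svd_obfuscate(image: np.ndarray, k: int) -> np.ndarray:
    """Rank-k SVD truncation of a 2-D grayscale image.

    Keeping the k largest singular components preserves coarse anatomy
    while suppressing high-frequency, potentially identifying detail.
    """
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

# Toy usage: a random 128x128 frame stands in for an angiographic image.
frame = np.random.rand(128, 128)
obfuscated = svd_obfuscate(frame, k=10)
print(frame.shape, obfuscated.shape)
```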