Securing the Classification of COVID-19 in Chest X-ray Images: A Privacy-Preserving Deep Learning Approach
- URL: http://arxiv.org/abs/2203.07728v1
- Date: Tue, 15 Mar 2022 08:48:47 GMT
- Title: Securing the Classification of COVID-19 in Chest X-ray Images: A Privacy-Preserving Deep Learning Approach
- Authors: Wadii Boulila, Adel Ammar, Bilel Benjdira, Anis Koubaa
- Abstract summary: We propose a privacy-preserving deep learning (PPDL)-based approach to secure the classification of Chest X-ray images.
The proposed approach is based on two steps: encrypting the dataset using partially homomorphic encryption and training/testing the DL algorithm over the encrypted images.
Experimental results on the COVID-19 Radiography database show that the MobileNetV2 model achieves an accuracy of 94.2% over the plain data and 93.3% over the encrypted data.
- Score: 1.4146420810689422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning (DL) is being increasingly utilized in healthcare-related
fields due to its outstanding efficiency. However, we have to keep the
individual health data used by DL models private and secure. Protecting data
and preserving the privacy of individuals has become an increasingly prevalent
issue. The gap between the DL and privacy communities must be bridged. In this
paper, we propose a privacy-preserving deep learning (PPDL)-based approach to
secure the classification of Chest X-ray images. This study aims to use Chest
X-ray images to their fullest potential without compromising the privacy of the
data they contain. The proposed approach is based on two steps: encrypting
the dataset using partially homomorphic encryption and training/testing the DL
algorithm over the encrypted images. Experimental results on the COVID-19
Radiography database show that the MobileNetV2 model achieves an accuracy of
94.2% over the plain data and 93.3% over the encrypted data.
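The two steps named in the abstract (partially homomorphic encryption of the images, then training/testing over the ciphertexts) can be illustrated with a minimal sketch. The abstract does not name the cryptosystem or the tooling, so the sketch below assumes Paillier via the python-paillier (phe) package and uses a toy 4x4 array in place of a chest X-ray; the MobileNetV2 training step is only indicated, not implemented.

```python
# Minimal sketch of the two-step PPDL pipeline described in the abstract.
# Assumptions (not stated in the abstract): Paillier as the partially
# homomorphic scheme, the python-paillier (phe) package, and a toy 4x4
# grayscale patch standing in for a chest X-ray image.
import numpy as np
from phe import paillier

# Step 1: encrypt the dataset with a partially homomorphic cryptosystem.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
image = np.random.randint(0, 256, size=(4, 4))
encrypted = [[public_key.encrypt(int(p)) for p in row] for row in image]

# "Partially" homomorphic: additions and scalar multiplications are
# supported directly on ciphertexts.
combined = encrypted[0][0] * 2 + encrypted[0][1]
assert private_key.decrypt(combined) == 2 * int(image[0, 0]) + int(image[0, 1])

# Step 2: train/test the DL model (MobileNetV2 in the paper) over the
# encrypted images. Only the data-preparation side is indicated here: the
# ciphertexts are mapped to an array-like representation that an input
# pipeline could normalize and batch. This mapping is illustrative, not
# the paper's procedure.
cipher_array = np.array(
    [[c.ciphertext() % 256 for c in row] for row in encrypted],
    dtype=np.float32,
)
print(cipher_array.shape)  # (4, 4)
```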
Related papers
- Privacy-Preserving Medical Image Classification through Deep Learning and Matrix Decomposition [0.0]
Deep learning (DL) solutions have been extensively researched in the medical domain in recent years.
Because the usage of health-related data is strictly regulated, processing medical records outside the hospital environment demands robust data protection measures.
In this paper, we use singular value decomposition (SVD) and principal component analysis (PCA) to obfuscate the medical images before employing them in the DL analysis.
The capability of DL algorithms to extract relevant information from secured data is assessed on a task of angiographic view classification based on obfuscated frames (see the obfuscation sketch after this list).
arXiv Detail & Related papers (2023-08-31T08:21:09Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst -- crucially -- better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging [47.99192239793597]
We evaluated the effect of privacy-preserving training of AI models on accuracy and fairness compared to non-private training.
Our study shows that -- under the challenging realistic circumstances of a real-life clinical dataset -- the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
arXiv Detail & Related papers (2023-02-03T09:49:13Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Deep Learning-based Anonymization of Chest Radiographs: A Utility-preserving Measure for Patient Privacy [7.240611820374677]
The conventional anonymization process is carried out by obscuring personal information in the images with black boxes.
Such simple measures retain biometric information in the chest radiographs, allowing patients to be re-identified by a linkage attack.
We propose PriCheXy-Net, the first deep learning-based approach for the targeted anonymization of chest radiographs.
arXiv Detail & Related papers (2022-09-23T11:36:32Z)
- Privacy-Preserving Deep Learning Model for Covid-19 Disease Detection [3.351714665243138]
We propose differentially private deep learning models to secure the patients' private information.
Accuracy is reported while varying the number of trainable layers, the privacy loss, and the amount of information used from each sample (see the differentially private training sketch after this list).
arXiv Detail & Related papers (2022-09-07T06:15:02Z)
- Syfer: Neural Obfuscation for Private Data Release [58.490998583666276]
We develop Syfer, a neural obfuscation method to protect against re-identification attacks.
Syfer composes trained layers with random neural networks to encode the original data.
It maintains the ability to predict diagnoses from the encoded data.
arXiv Detail & Related papers (2022-01-28T20:32:04Z)
- Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
arXiv Detail & Related papers (2021-10-12T19:57:12Z)
- FedDPGAN: Federated Differentially Private Generative Adversarial Networks Framework for the Detection of COVID-19 Pneumonia [11.835113185061147]
We propose a Federated Differentially Private Generative Adversarial Network (FedDPGAN) to detect COVID-19 pneumonia.
The evaluation of the proposed model is on three types of chest X-ray (CXR) image datasets (COVID-19, normal, and normal pneumonia).
arXiv Detail & Related papers (2021-04-26T13:52:12Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
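As a pointer for the "Privacy-Preserving Medical Image Classification through Deep Learning and Matrix Decomposition" entry above: one common way to obfuscate an image with SVD is to reconstruct it from only a subset of its singular components. The sketch below assumes that recipe (dropping the leading components) and uses NumPy only; the paper's exact obfuscation procedure and parameters may differ.

```python
# Minimal sketch of SVD-based image obfuscation (assumed recipe: zero out
# the leading singular components, which carry most of the visual
# appearance, and reconstruct the frame from the remainder).
import numpy as np

def obfuscate_svd(image: np.ndarray, drop: int = 5) -> np.ndarray:
    """Reconstruct `image` without its `drop` largest singular components."""
    u, s, vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    s[:drop] = 0.0                    # remove the dominant components
    return (u * s) @ vt               # obfuscated frame for the DL classifier

frame = np.random.rand(128, 128)      # toy stand-in for an angiographic frame
obfuscated = obfuscate_svd(frame, drop=5)
print(obfuscated.shape)               # (128, 128)
```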
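Similarly, for the "Privacy-Preserving Deep Learning Model for Covid-19 Disease Detection" entry: differentially private training typically clips each per-sample gradient and adds Gaussian noise calibrated to the clipping norm before the update. The function below is a NumPy illustration of that mechanism with made-up hyperparameters, not the paper's actual implementation.

```python
# Minimal sketch of a DP-SGD-style update: clip each per-sample gradient,
# average, and add Gaussian noise calibrated to the clipping norm.
# Hyperparameters are illustrative only.
import numpy as np

def dp_gradient_step(per_sample_grads: np.ndarray,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1) -> np.ndarray:
    """Return a privatized average gradient from a (batch, dim) array."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale                  # per-sample clipping
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=per_sample_grads.shape[1])
    return clipped.mean(axis=0) + noise / per_sample_grads.shape[0]

grads = np.random.randn(32, 10)       # toy per-sample gradients for one batch
print(dp_gradient_step(grads).shape)  # (10,)
```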
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.