Privacy-Preserving Medical Image Classification through Deep Learning
and Matrix Decomposition
- URL: http://arxiv.org/abs/2308.16530v1
- Date: Thu, 31 Aug 2023 08:21:09 GMT
- Title: Privacy-Preserving Medical Image Classification through Deep Learning
and Matrix Decomposition
- Authors: Andreea Bianca Popescu, Cosmin Ioan Nita, Ioana Antonia Taca, Anamaria
Vizitiu, Lucian Mihai Itu
- Abstract summary: Deep learning (DL) solutions have been extensively researched in the medical domain in recent years.
Since the usage of health-related data is strictly regulated, processing medical records outside the hospital environment demands robust data protection measures.
In this paper, we use singular value decomposition (SVD) and principal component analysis (PCA) to obfuscate the medical images before employing them in the DL analysis.
The capability of DL algorithms to extract relevant information from secured data is assessed on a task of angiographic view classification based on obfuscated frames.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL)-based solutions have been extensively researched in the
medical domain in recent years, enhancing the efficacy of diagnosis, planning,
and treatment. Since the usage of health-related data is strictly regulated,
processing medical records outside the hospital environment for developing and
using DL models demands robust data protection measures. At the same time, it
can be challenging to guarantee that a DL solution delivers a minimum level of
performance when being trained on secured data, without being specifically
designed for the given task. Our approach uses singular value decomposition
(SVD) and principal component analysis (PCA) to obfuscate the medical images
before employing them in the DL analysis. The capability of DL algorithms to
extract relevant information from secured data is assessed on a task of
angiographic view classification based on obfuscated frames. The security level
is probed by simulated artificial intelligence (AI)-based reconstruction
attacks, considering two threat actors with different prior knowledge of the
targeted data. The degree of privacy is quantitatively measured using
similarity indices. Although a trade-off between privacy and accuracy should be
considered, the proposed technique allows for training the angiographic view
classifier exclusively on secured data with satisfactory performance and with
no computational overhead, model adaptation, or hyperparameter tuning. While
the obfuscated medical image content is well protected against human
perception, the simulated reconstruction attacks also showed that recovering
the complete information of the original frames is difficult.
Related papers
- FedDP: Privacy-preserving method based on federated learning for histopathology image segmentation [2.864354559973703]
This paper addresses the dispersed nature and privacy sensitivity of medical image data by employing a federated learning framework.
The proposed method, FedDP, minimally impacts model accuracy while effectively safeguarding the privacy of cancer pathology image data.
arXiv Detail & Related papers (2024-11-07T08:02:58Z)
- Remembering Everything Makes You Vulnerable: A Limelight on Machine Unlearning for Personalized Healthcare Sector [0.873811641236639]
This thesis aims to address the vulnerability of personalized healthcare models, particularly in the context of ECG monitoring.
We propose an approach termed "Machine Unlearning" to mitigate the impact of exposed data points on machine learning models.
arXiv Detail & Related papers (2024-07-05T15:38:36Z)
- Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking [24.850260039814774]
Fears of unauthorized use, like training commercial AI models, hinder researchers from sharing their valuable datasets.
We propose the Sparsity-Aware Local Masking (SALM) method, which selectively perturbs significant pixel regions rather than the entire image.
Our experiments demonstrate that SALM effectively prevents unauthorized training of different models and outperforms previous SoTA data protection methods.
arXiv Detail & Related papers (2024-03-15T02:35:36Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods [10.504951891644474]
We develop and evaluate a privacy defense protocol based on using a generative adversarial network (GAN).
We validate the proposed method on retinal diagnostics AI used for diabetic retinopathy that bears the risk of possibly leaking private information.
arXiv Detail & Related papers (2021-03-04T15:02:57Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.