Application of Homomorphic Encryption in Medical Imaging
- URL: http://arxiv.org/abs/2110.07768v1
- Date: Tue, 12 Oct 2021 19:57:12 GMT
- Title: Application of Homomorphic Encryption in Medical Imaging
- Authors: Francis Dutil, Alexandre See, Lisa Di Jorio and Florent Chandelier
- Abstract summary: We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
- Score: 60.51436886110803
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this technical report, we explore the use of homomorphic encryption (HE)
in the context of training and predicting with deep learning (DL) models to
deliver strict Privacy by Design services, and to enforce a zero-trust
model of data governance. First, we show how HE can be used to make predictions
over medical images while preventing unauthorized secondary use of data, and
detail our results on a disease classification task with OCT images. Then, we
demonstrate that HE can be used to secure the training of DL models through
federated learning, and report some experiments using 3D chest CT-Scans for a
nodule detection task.
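The report itself does not ship an implementation, but the following is a minimal sketch of the kind of encrypted inference the abstract describes: a client encrypts an image feature vector under the CKKS scheme and a server evaluates a linear classification head directly on the ciphertext, so the plaintext image never leaves the client. It assumes the TenSEAL library; the feature dimension, weight values, and encryption parameters are illustrative and not taken from the paper.

```python
# Minimal sketch of encrypted inference with CKKS (assumes the TenSEAL library).
# The linear layer stands in for the final classification head of a DL model;
# all dimensions and values are illustrative, not from the paper.
import numpy as np
import tenseal as ts

# Client side: create a CKKS context; the secret key stays with the client.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

features = np.random.randn(64)                  # e.g. pooled features of an OCT image
enc_features = ts.ckks_vector(context, features.tolist())

# Server side: evaluate a plaintext linear head on the encrypted vector.
weights = np.random.randn(64).tolist()          # hypothetical per-class weight vector
bias = [0.1]                                    # bias added as a length-1 plain vector
enc_score = enc_features.dot(weights) + bias    # computation stays encrypted throughout

# Client side: only the secret-key holder can read the prediction.
score = enc_score.decrypt()[0]
print("decrypted class score:", score)
```

In a deployment like the one the abstract hints at, the server never sees the decrypted features or the score, which is what prevents unauthorized secondary use of the image data.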
Related papers
- Robustness Testing of Black-Box Models Against CT Degradation Through Test-Time Augmentation [1.7788343872869767]
Deep learning models for medical image segmentation and object detection are becoming increasingly available as clinical products.
As details are rarely provided about the training data, models may unexpectedly fail when cases differ from those in the training distribution.
A method to test the robustness of these models against CT image quality variation is presented.
arXiv Detail & Related papers (2024-06-27T22:17:49Z)
- Unsupervised Contrastive Analysis for Salient Pattern Detection using Conditional Diffusion Models [13.970483987621135]
Contrastive Analysis (CA) aims to identify patterns in images that allow distinguishing between a background (BG) dataset and a target (TG) dataset (i.e., unhealthy subjects).
Recent works on this topic rely on variational autoencoders (VAE) or contrastive learning strategies to learn the patterns that separate TG samples from BG samples in a supervised manner.
We employ a self-supervised contrastive encoder to learn a latent representation encoding only common patterns from input images, using samples exclusively from the BG dataset during training, and approximating the distribution of the target patterns by leveraging data augmentation techniques.
arXiv Detail & Related papers (2024-06-02T15:19:07Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Self-supervised Model Based on Masked Autoencoders Advance CT Scans Classification [0.0]
This paper is inspired by the self-supervised learning algorithm MAE.
It uses the MAE model pre-trained on ImageNet to perform transfer learning on a CT scan dataset.
This method improves the generalization performance of the model and avoids the risk of overfitting on small datasets.
arXiv Detail & Related papers (2022-10-11T00:52:05Z)
- Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on the descriptors of the adjacent slices along the axis.
We obtain a single model in the top 4% best-performing solutions of the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging.
arXiv Detail & Related papers (2022-08-05T23:20:37Z)
- Assessing Privacy Leakage in Synthetic 3-D PET Imaging using Transversal GAN [2.0764611233067534]
We introduce our 3-D generative model, Transversal GAN (TrGAN) using head & neck PET images conditioned on tumour masks as a case study.
We show that the discriminator of the TrGAN is vulnerable to attack, and that an attacker can identify which samples were used in training with almost perfect accuracy.
This suggests that TrGAN generators, but not discriminators, may be used for sharing synthetic 3-D PET data with minimal privacy risk.
arXiv Detail & Related papers (2022-06-13T20:02:32Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
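Both the main abstract and the PriMIA entry above refer to securing federated training by aggregating model updates without exposing them in the clear. The sketch below shows one way additive homomorphic encryption can support this, using the python-paillier (`phe`) library: clients encrypt their weight updates, the server sums the ciphertexts without decrypting them, and only the key holder recovers the average. The library choice, the key ownership, and the three-client setup are assumptions made for illustration, not details from either paper.

```python
# Minimal sketch of HE-secured federated averaging (assumes the python-paillier `phe` library).
# The key setup and the three-client scenario are illustrative; neither paper specifies this protocol.
from phe import paillier

# Key pair held by a trusted aggregator (or managed jointly in a real deployment).
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client computes a local model update (here: a tiny 3-parameter vector).
client_updates = [
    [0.10, -0.20, 0.05],
    [0.12, -0.18, 0.07],
    [0.08, -0.25, 0.04],
]

# Clients encrypt their updates before sending them to the server.
encrypted_updates = [
    [public_key.encrypt(w) for w in update] for update in client_updates
]

# Server side: sum the ciphertexts parameter-wise without ever decrypting them.
num_clients = len(encrypted_updates)
encrypted_sum = encrypted_updates[0]
for update in encrypted_updates[1:]:
    encrypted_sum = [acc + w for acc, w in zip(encrypted_sum, update)]

# Key holder decrypts only the aggregate and averages; individual updates stay hidden.
averaged = [private_key.decrypt(s) / num_clients for s in encrypted_sum]
print("aggregated update:", averaged)
```

Because the server only ever handles ciphertexts, no single client's update is revealed, which is the property both papers rely on when combining HE with federated learning.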
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.