Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models
- URL: http://arxiv.org/abs/2104.02107v1
- Date: Mon, 5 Apr 2021 18:23:36 GMT
- Title: Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models
- Authors: Neal Mangaokar, Jiameng Pu, Parantapa Bhattacharya, Chandan K. Reddy,
Bimal Viswanath
- Abstract summary: Jekyll is a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition.
We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes.
We also investigate defensive measures based on machine learning to detect images generated by Jekyll.
- Score: 8.853343040790795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in deep neural networks (DNNs) have shown tremendous promise in the
medical domain. However, the deep learning tools that are helping the domain
can also be used against it. Given the prevalence of fraud in the healthcare
domain, it is important to consider the adversarial use of DNNs in manipulating
sensitive data that is crucial to patient healthcare. In this work, we present
the design and implementation of a DNN-based image translation attack on
biomedical imagery. More specifically, we propose Jekyll, a neural style
transfer framework that takes as input a biomedical image of a patient and
translates it to a new image that indicates an attacker-chosen disease
condition. The potential for fraudulent claims based on such generated 'fake'
medical images is significant, and we demonstrate successful attacks on both
X-rays and retinal fundus image modalities. We show that these attacks manage
to mislead both medical professionals and algorithmic detection schemes.
Lastly, we also investigate defensive measures based on machine learning to
detect images generated by Jekyll.
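The paper itself does not include code here; as a minimal sketch of the attack surface it describes, the PyTorch snippet below shows how an attacker-trained image-to-image generator would rewrite a scan. The `ToyGenerator` architecture is a placeholder assumption, not Jekyll's actual style-transfer model.

```python
# Minimal sketch: a trained generator G translates a biomedical image
# into one showing an attacker-chosen condition. ToyGenerator is a
# stand-in; Jekyll's real architecture is described in the paper.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder for an attacker-trained translation generator G: image -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

generator = ToyGenerator()                 # in practice: load trained weights
xray = torch.rand(1, 1, 256, 256)          # victim's original scan, in [0, 1]
with torch.no_grad():
    fake = generator(xray * 2 - 1) * 0.5 + 0.5  # translated scan, back in [0, 1]
print(fake.shape)                          # torch.Size([1, 1, 256, 256])
```

The attacker would train such a generator on images of the target disease condition; the victim-facing output keeps the original resolution and modality.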
Related papers
- MITS-GAN: Safeguarding Medical Imaging from Tampering with Generative Adversarial Networks [48.686454485328895]
This study introduces MITS-GAN, a novel approach to prevent tampering in medical images.
The approach disrupts the output of the attacker's CT-GAN architecture by introducing finely tuned perturbations that are imperceptible to the human eye.
Experimental results on CT scans demonstrate MITS-GAN's superior performance.
arXiv Detail & Related papers (2024-01-17T22:30:41Z)
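A hedged sketch of the protective idea above, not MITS-GAN's actual procedure: nudge the CT image within a small, imperceptible budget so that a surrogate tampering generator can no longer reproduce its usual output. `surrogate_gan` is a hypothetical stand-in for the attacker's CT-GAN.

```python
# PGD-style ascent: maximize the surrogate GAN's output distortion
# under an L-infinity budget, so tampering produces visible artifacts.
import torch

def protect(image, surrogate_gan, eps=2 / 255, alpha=0.5 / 255, steps=10):
    delta = torch.zeros_like(image, requires_grad=True)
    reference = surrogate_gan(image).detach()   # untouched tampering result
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(surrogate_gan(image + delta), reference)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend: push output away
            delta.clamp_(-eps, eps)             # keep perturbation imperceptible
        delta.grad = None
    return (image + delta.detach()).clamp(0, 1)
```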
- Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis [54.60796004113496]
We demonstrate that the eye movements of radiologists reading medical images can serve as a new form of supervision for training DNN-based computer-aided diagnosis (CAD) systems.
We record the radiologists' gaze tracks while they read the images.
The gaze information is processed and then used to supervise the DNN's attention via an Attention Consistency module.
arXiv Detail & Related papers (2022-04-06T08:31:05Z)
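A minimal sketch of the supervision signal described above, assuming the gaze tracks have already been rendered into a heatmap; the paper's Attention Consistency module is more elaborate than this plain MSE penalty.

```python
# Penalize disagreement between the model's attention map and a
# heatmap rendered from the radiologist's recorded gaze.
import torch
import torch.nn.functional as F

def gaze_consistency_loss(attn_map, gaze_heatmap, eps=1e-8):
    """attn_map, gaze_heatmap: (B, 1, H, W) non-negative maps.
    Normalize both to unit mass, then compare with MSE."""
    a = attn_map / (attn_map.sum(dim=(2, 3), keepdim=True) + eps)
    g = gaze_heatmap / (gaze_heatmap.sum(dim=(2, 3), keepdim=True) + eps)
    return F.mse_loss(a, g)

# during training, added to the usual objective (lambda_gaze is tunable):
# loss = cross_entropy(logits, labels) + lambda_gaze * gaze_consistency_loss(attn, gaze)
```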
- The Security of Deep Learning Defences for Medical Imaging [36.060636819669604]
We show that an informed attacker can evade five of the current state-of-the-art defences while successfully fooling the victim's deep learning model.
We suggest better alternatives for securing healthcare DNNs from such attacks: (1) harden the system's security and (2) use digital signatures.
arXiv Detail & Related papers (2022-01-21T12:11:17Z)
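The second recommendation is standard public-key signing. A minimal sketch with the Python `cryptography` package (the key-distribution workflow and the stand-in image bytes are assumptions): the imaging device signs each acquisition, and any later pixel manipulation breaks verification.

```python
# Sign image bytes at acquisition; verify before diagnosis, so any
# post-hoc tampering invalidates the signature.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

modality_key = Ed25519PrivateKey.generate()  # held by the scanner/modality
public_key = modality_key.public_key()       # distributed to verifiers

image_bytes = b"\x00" * 1024                 # stand-in for the DICOM pixel data
signature = modality_key.sign(image_bytes)   # stored with the study metadata

tampered = image_bytes[:-1] + b"\x01"        # any post-hoc edit changes the bytes
try:
    public_key.verify(signature, tampered)   # raises on altered bytes
    print("image authentic")
except InvalidSignature:
    print("image was modified after acquisition")
```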
- FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis [82.2511780233828]
We propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various medical image analysis tasks.
Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images.
arXiv Detail & Related papers (2021-12-02T11:52:17Z)
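A sketch of that frequency-domain trigger, assuming grayscale images and a simple corner mask for the low-frequency band (FIBA's exact masking and blend schedule may differ): blend the amplitude spectra, keep the clean image's phase, and invert the FFT.

```python
# Inject a trigger image's low-frequency amplitude into a clean image
# while preserving the clean image's phase spectrum.
import numpy as np

def inject_trigger(clean, trigger, alpha=0.15, band=0.1):
    """clean, trigger: 2-D float arrays of the same shape.
    alpha: blend weight; band: fractional width of the low-frequency window."""
    Fc, Ft = np.fft.fft2(clean), np.fft.fft2(trigger)
    amp_c, phase_c = np.abs(Fc), np.angle(Fc)
    amp_t = np.abs(Ft)
    h, w = clean.shape
    bh, bw = int(h * band), int(w * band)
    mask = np.zeros((h, w))
    mask[:bh, :bw] = 1; mask[:bh, -bw:] = 1    # low frequencies sit at the
    mask[-bh:, :bw] = 1; mask[-bh:, -bw:] = 1  # corners of an unshifted FFT
    blended = (1 - alpha * mask) * amp_c + alpha * mask * amp_t  # linear blend
    return np.fft.ifft2(blended * np.exp(1j * phase_c)).real
```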
- Pathology-Aware Generative Adversarial Networks for Medical Image Augmentation [0.22843885788439805]
Generative Adversarial Networks (GANs) can generate realistic but novel samples, and thus effectively cover the real image distribution.
This thesis contains four GAN projects that, in collaboration with physicians, demonstrate the clinical relevance of such novel applications.
arXiv Detail & Related papers (2021-06-03T15:08:14Z)
- Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images [1.2809525640002362]
In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images.
The results show that IND adversarial training can improve the CNN robustness to IND adversarial attacks, and larger training datasets may lead to higher IND robustness.
arXiv Detail & Related papers (2021-02-04T20:57:49Z)
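A minimal sketch of the IND adversarial-training recipe that study evaluates, using one-step FGSM for brevity (the paper may use stronger multi-step attacks); `loss_fn` would be a regression loss such as MSE for shape reconstruction.

```python
# Adversarial training: craft an adversarial counterpart of each batch
# and take the optimization step on it instead of the clean batch.
import torch

def fgsm(model, x, y, loss_fn, eps=2 / 255):
    """One-step L-infinity adversarial example (FGSM)."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach().clamp(0, 1)

def adv_train_step(model, optimizer, loss_fn, x, y, eps=2 / 255):
    x_adv = fgsm(model, x, y, loss_fn, eps)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```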
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Bias Field Poses a Threat to DNN-based X-Ray Recognition [21.317001512826476]
The bias field caused by improper medical image acquisition widely exists in chest X-ray images.
In this paper, we study this problem through the lens of recent adversarial attacks and propose a brand-new attack.
Our method reveals a potential threat to DNN-based automated X-ray diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.
arXiv Detail & Related papers (2020-09-19T14:58:02Z)
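For intuition about the threat model above, here is a sketch of a low-order polynomial bias field applied multiplicatively to an X-ray. The paper searches for such fields adversarially; the polynomial parameterization and the coefficients below are illustrative assumptions.

```python
# A bias field is a smooth multiplicative intensity map; even a mild
# one can shift a DNN's prediction while looking like scanner artifact.
import numpy as np

def apply_bias_field(image, coeffs):
    """image: 2-D array in [0, 1]; coeffs: 3x3 weights of a low-order
    polynomial field over normalized pixel coordinates."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys / (h - 1), xs / (w - 1)
    field = sum(coeffs[i][j] * (xs ** i) * (ys ** j)
                for i in range(3) for j in range(3))
    return np.clip(image * field, 0, 1)

xray = np.random.rand(256, 256)              # stand-in chest X-ray
biased = apply_bias_field(xray, coeffs=[[1.0, 0.2, 0.0],
                                        [0.1, 0.0, 0.0],
                                        [0.0, 0.0, 0.1]])
```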
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation method whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
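A minimal sketch of the cycle-consistency term that makes the unpaired translation above trainable; adversarial losses are omitted, and `G` and `F_` are hypothetical generator networks, not the paper's released code.

```python
# Cycle consistency: translating to the other domain and back should
# recover the original input, even without paired training data.
import torch
import torch.nn.functional as F

def cycle_loss(G, F_, real_img, real_ann):
    """G: image -> annotation, F_: annotation -> image (both generators).
    Returns the two L1 reconstruction penalties used alongside the
    usual adversarial terms."""
    forward_cycle = F.l1_loss(F_(G(real_img)), real_img)    # img -> ann -> img
    backward_cycle = F.l1_loss(G(F_(real_ann)), real_ann)   # ann -> img -> ann
    return forward_cycle + backward_cycle
```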