Adversarial Exposure Attack on Diabetic Retinopathy Imagery
- URL: http://arxiv.org/abs/2009.09231v1
- Date: Sat, 19 Sep 2020 13:47:33 GMT
- Title: Adversarial Exposure Attack on Diabetic Retinopathy Imagery
- Authors: Yupeng Cheng, Felix Juefei-Xu, Qing Guo, Huazhu Fu, Xiaofei Xie,
Shang-Wei Lin, Weisi Lin, Yang Liu
- Abstract summary: Diabetic retinopathy (DR) is a leading cause of vision loss worldwide, and numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically classify DR cases from retinal fundus images (RFIs).
RFIs are usually affected by camera exposure, yet the robustness of DNNs to exposure variation is rarely explored.
In this paper, we study this problem from the viewpoint of adversarial attack and identify a new task, i.e., the adversarial exposure attack, which generates adversarial images by tuning image exposure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diabetic retinopathy (DR) is a leading cause of vision loss in the world and
numerous cutting-edge works have built powerful deep neural networks (DNNs) to
automatically classify the DR cases via the retinal fundus images (RFIs).
However, RFIs are usually affected by camera exposure, and the robustness of
DNNs to exposure variation is rarely explored. In this paper, we study this
problem from the viewpoint of adversarial attack and identify a new task, i.e.,
the adversarial exposure attack, which generates adversarial images by tuning
image exposure to mislead DNNs with high transferability. To this end, we first
implement a straightforward method, i.e., a multiplicative-perturbation-based
exposure attack, and reveal the key challenges of this new task. Then, to
preserve the naturalness of the adversarial images, we propose adversarial
bracketed exposure fusion, which casts the exposure attack as an element-wise
bracketed exposure fusion problem in the Laplacian-pyramid space. Moreover, to
achieve high transferability, we further propose convolutional bracketed
exposure fusion, in which the element-wise multiplicative operation is extended
to a convolution. We validate our method on a real public DR dataset with
advanced DNNs, e.g., ResNet50, MobileNet, and EfficientNet, showing that our
method achieves both high image quality and a high transfer-attack success
rate. Our method reveals a potential threat to DNN-based automated DR diagnosis
and can benefit the development of exposure-robust automated DR diagnosis
methods in the future.
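The multiplicative-perturbation-based baseline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `grad_fn` interface, the step size, the number of iterations, and the exposure bounds are all assumptions chosen for clarity.

```python
import numpy as np

def exposure_attack(image, grad_fn, steps=10, lr=0.05, e_min=0.5, e_max=2.0):
    """Multiplicative-perturbation-based exposure attack (sketch).

    Instead of adding noise, the attack scales each pixel by an exposure
    map E, i.e. x_adv = E * x, and optimizes E to increase the classifier
    loss. grad_fn(x_adv) must return d(loss)/d(x_adv) for the target model.
    """
    E = np.ones_like(image)           # start from unchanged exposure
    for _ in range(steps):
        x_adv = E * image
        g_x = grad_fn(x_adv)          # gradient of loss w.r.t. the image
        g_E = g_x * image             # chain rule: dL/dE = dL/dx * x
        E = E + lr * np.sign(g_E)     # sign-gradient ascent (untargeted)
        E = np.clip(E, e_min, e_max)  # keep exposure physically plausible
    return np.clip(E * image, 0.0, 1.0), E
```

Constraining the perturbation to a bounded multiplicative exposure map, rather than arbitrary additive noise, is what ties the attack to a realistic camera-exposure degradation.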
Related papers
- RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors [57.81012948133832]
We present RAID (Robust evaluation of AI-generated image Detectors), a dataset of 72k diverse and highly transferable adversarial examples. Our methodology generates adversarial images that transfer with a high success rate to unseen detectors. Our findings indicate that current state-of-the-art AI-generated image detectors can be easily deceived by adversarial examples.
arXiv Detail & Related papers (2025-06-04T14:16:00Z) - OpticalDR: A Deep Optical Imaging Model for Privacy-Protective
Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system that erases the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery while preserving the essential disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z) - Adversarial alignment: Breaking the trade-off between the strength of an
attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z) - Improving Classification of Retinal Fundus Image Using Flow Dynamics
Optimized Deep Learning Methods [0.0]
Diabetic Retinopathy (DR) refers to a complication of diabetes mellitus that damages the blood vessel network of the retina.
DR diagnosis from color fundus pictures can be time-consuming because experienced clinicians are required to identify the lesions in the imagery that indicate the illness.
arXiv Detail & Related papers (2023-04-29T16:11:34Z) - Towards Understanding and Boosting Adversarial Transferability from a
Distribution Perspective [80.02256726279451]
Adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years.
We propose a novel method that crafts adversarial examples by manipulating the distribution of the image.
Our method can significantly improve the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios.
arXiv Detail & Related papers (2022-10-09T09:58:51Z) - 3D Convolutional Neural Networks for Stalled Brain Capillary Detection [72.21315180830733]
Brain vasculature dysfunctions such as stalled blood flow in cerebral capillaries are associated with cognitive decline and pathogenesis in Alzheimer's disease.
Here, we describe a deep learning-based approach for automatic detection of stalled capillaries in brain images based on 3D convolutional neural networks.
In this setting, our approach outperformed other methods and demonstrated state-of-the-art results, achieving 0.85 Matthews correlation coefficient, 85% sensitivity, and 99.3% specificity.
arXiv Detail & Related papers (2021-04-04T20:30:14Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z) - Adversarial Robustness Study of Convolutional Neural Network for Lumbar
Disk Shape Reconstruction from MR images [1.2809525640002362]
In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images.
The results show that IND adversarial training can improve the CNN robustness to IND adversarial attacks, and larger training datasets may lead to higher IND robustness.
arXiv Detail & Related papers (2021-02-04T20:57:49Z) - Bias Field Poses a Threat to DNN-based X-Ray Recognition [21.317001512826476]
A bias field, caused by an improper medical image acquisition process, widely exists in chest X-ray images.
In this paper, we study this problem building on recent adversarial attack techniques and propose a brand-new attack.
Our method reveals a potential threat to DNN-based automated X-ray diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.
arXiv Detail & Related papers (2020-09-19T14:58:02Z) - A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading,
and Transferability [76.64661091980531]
People with diabetes are at risk of developing diabetic retinopathy (DR).
Computer-aided DR diagnosis is a promising tool for early detection of DR and severity grading.
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists.
arXiv Detail & Related papers (2020-08-22T07:48:04Z) - Adversarial Attacks on Convolutional Neural Networks in Facial
Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply Fast Gradient Sign Method to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
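For reference, the Fast Gradient Sign Method mentioned in the entry above is a one-step additive attack, in contrast to the multiplicative exposure attack of the main paper. A minimal sketch, assuming the same `grad_fn` interface and an illustrative epsilon:

```python
import numpy as np

def fgsm(image, grad_fn, eps=0.03):
    """Fast Gradient Sign Method (sketch).

    A single additive step in the sign direction of the loss gradient;
    eps bounds the per-pixel perturbation magnitude.
    """
    perturbation = eps * np.sign(grad_fn(image))
    return np.clip(image + perturbation, 0.0, 1.0)
```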
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.