Adversarial Exposure Attack on Diabetic Retinopathy Imagery
- URL: http://arxiv.org/abs/2009.09231v1
- Date: Sat, 19 Sep 2020 13:47:33 GMT
- Title: Adversarial Exposure Attack on Diabetic Retinopathy Imagery
- Authors: Yupeng Cheng, Felix Juefei-Xu, Qing Guo, Huazhu Fu, Xiaofei Xie,
Shang-Wei Lin, Weisi Lin, Yang Liu
- Abstract summary: Diabetic retinopathy (DR) is a leading cause of vision loss worldwide, and numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically classify DR cases from retinal fundus images (RFIs).
However, RFIs are commonly affected by camera-exposure variation, and the robustness of DNNs to such exposure changes is rarely explored.
In this paper, we study this problem from the viewpoint of adversarial attack and identify a new task, i.e., the adversarial exposure attack, which generates adversarial images.
- Score: 69.90046859398014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diabetic retinopathy (DR) is a leading cause of vision loss worldwide,
and numerous cutting-edge works have built powerful deep neural networks (DNNs)
to automatically classify DR cases from retinal fundus images (RFIs).
However, RFIs are commonly affected by camera-exposure variation, and the
robustness of DNNs to such exposure changes is rarely explored. In this paper,
we study this problem from the viewpoint of adversarial attack and identify a
new task, i.e., the adversarial exposure attack, which generates adversarial
images by tuning image exposure to mislead DNNs with significantly high
transferability. To this end, we first implement a straightforward method,
i.e., the multiplicative-perturbation-based exposure attack, and reveal the key
challenges of this new task. Then, to make the adversarial images look natural,
we propose the adversarial bracketed exposure fusion that regards the exposure
attack as an element-wise bracketed exposure fusion problem in the
Laplacian-pyramid space. Moreover, to realize high transferability, we further
propose the convolutional bracketed exposure fusion where the element-wise
multiplicative operation is extended to a convolution. We validate our method
on a real public DR dataset against advanced DNNs, e.g., ResNet50,
MobileNet, and EfficientNet, showing that our method achieves high image
quality and a high success rate for the transfer attack. Our method reveals
potential threats to DNN-based automated DR diagnosis and can inform the
development of exposure-robust automated DR diagnosis methods in the future.
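The multiplicative-perturbation-based exposure attack described above can be sketched as a simple gradient-sign loop over a per-pixel exposure map. This is a hedged illustration, not the authors' implementation: `loss_grad_fn`, the step size, and the exposure bounds are all assumptions standing in for the target DNN's loss gradient and the paper's actual constraints.

```python
import numpy as np

def multiplicative_exposure_attack(image, loss_grad_fn, steps=10, lr=0.05,
                                   e_min=0.5, e_max=2.0):
    """Toy multiplicative exposure attack sketch.

    image: H x W x C float array in [0, 1].
    loss_grad_fn: returns d(loss)/d(adversarial image); stands in for the
    gradient of the attacked classifier's loss (an assumption here).
    The attack tunes a per-pixel exposure map E and renders I' = clip(E * I).
    """
    exposure = np.ones_like(image)
    for _ in range(steps):
        adv = np.clip(exposure * image, 0.0, 1.0)
        grad_adv = loss_grad_fn(adv)
        # chain rule: d(loss)/d(E) = d(loss)/d(I') * I
        exposure += lr * np.sign(grad_adv * image)
        # keep the exposure map within a plausible bracketing range
        exposure = np.clip(exposure, e_min, e_max)
    return np.clip(exposure * image, 0.0, 1.0), exposure
```

The paper's bracketed exposure fusion variants would replace this raw per-pixel map with fusion weights over a stack of differently exposed images, optimized in Laplacian-pyramid space; the loop structure above is only the naive baseline.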
Related papers
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent
Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Online Overexposed Pixels Hallucination in Videos with Adaptive
Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
arXiv Detail & Related papers (2023-08-29T17:40:57Z)
- Training on Foveated Images Improves Robustness to Adversarial Attacks [26.472800216546233]
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks.
RBlur is an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation.
DNNs trained on images transformed by RBlur are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data.
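A foveation transform of the kind RBlur describes can be approximated in plain NumPy: blur strength and color desaturation grow with distance from a fixation point. This is a rough sketch only; the fixation point, number of blur levels, and desaturation strength are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def box_blur(img, k):
    """Simple box blur by averaging shifted copies (edges clamped)."""
    if k == 0:
        return img
    pad = np.pad(img, ((k, k), (k, k), (0, 0)), mode="edge")
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += pad[k + dy: k + dy + img.shape[0],
                       k + dx: k + dx + img.shape[1]]
            n += 1
    return acc / n

def foveate(image, fixation=(0.5, 0.5), desat=0.5):
    """RBlur-style sketch: blur and desaturation grow with eccentricity."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - fixation[0] * h, xs - fixation[1] * w)
    dist /= dist.max()  # normalized eccentricity in [0, 1]
    # pick a pre-blurred level per pixel according to eccentricity
    levels = np.stack([box_blur(image, k) for k in (0, 1, 2, 3)])
    idx = np.clip((dist * 4).astype(int), 0, 3)
    out = levels[idx, ys, xs]
    # desaturate toward grayscale as eccentricity grows
    gray = out.mean(axis=2, keepdims=True)
    alpha = (desat * dist)[..., None]
    return (1 - alpha) * out + alpha * gray
```

Pixels at the fixation point pass through unchanged, while the periphery is blurred and washed out, mimicking the fidelity loss of peripheral vision that the training-time transform exploits.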
arXiv Detail & Related papers (2023-08-01T21:40:30Z)
- Scattering Model Guided Adversarial Examples for SAR Target Recognition:
Attack and Defense [20.477411616398214]
This article explores domain knowledge of the SAR imaging process and proposes a novel Scattering Model Guided Adversarial Attack (SMGAA) algorithm.
The proposed SMGAA algorithm can generate adversarial perturbations in the form of electromagnetic scattering responses (called adversarial scatterers).
Comprehensive evaluations on the MSTAR dataset show that the adversarial scatterers generated by SMGAA are more robust to perturbations and transformations in the SAR processing chain than the currently studied attacks.
arXiv Detail & Related papers (2022-09-11T03:41:12Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
On our comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Window-Level is a Strong Denoising Surrogate [0.7251305766151019]
High radiation can be harmful to both patients and operators.
Deep learning-based approaches have been attempted to denoise low dose images.
Self-supervised learning is an emerging alternative for lowering the reference data requirement.
arXiv Detail & Related papers (2021-05-15T07:01:07Z)
- An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
- Universal Adversarial Perturbations Through the Lens of Deep
Steganography: Towards A Fourier Perspective [78.05383266222285]
A human imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Bias Field Poses a Threat to DNN-based X-Ray Recognition [21.317001512826476]
Bias fields caused by improper medical image acquisition widely exist in chest X-ray images.
In this paper, we study this problem through the lens of recent adversarial attacks and propose a brand-new attack.
Our method reveals the potential threat to DNN-based automated X-ray diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.
arXiv Detail & Related papers (2020-09-19T14:58:02Z)
- Vulnerability of deep neural networks for detecting COVID-19 cases from
chest X-ray images to universal adversarial attacks [0.0]
Computer-aided systems based on deep neural networks (DNNs) have been developed to rapidly and accurately detect COVID-19 cases.
We evaluate the vulnerability of DNNs to a single perturbation, called a universal adversarial perturbation (UAP).
The results demonstrate that the models are vulnerable to nontargeted and targeted UAPs, even for small UAPs.
arXiv Detail & Related papers (2020-05-22T08:54:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.