Bias Field Poses a Threat to DNN-based X-Ray Recognition
- URL: http://arxiv.org/abs/2009.09247v2
- Date: Mon, 3 May 2021 04:00:36 GMT
- Title: Bias Field Poses a Threat to DNN-based X-Ray Recognition
- Authors: Binyu Tian, Qing Guo, Felix Juefei-Xu, Wen Le Chan, Yupeng Cheng,
Xiaohong Li, Xiaofei Xie, Shengchao Qin
- Abstract summary: A bias field caused by improper medical image acquisition widely exists in chest X-ray images.
In this paper, we study this problem through the lens of recent adversarial attacks and propose a brand-new attack.
Our method reveals a potential threat to DNN-based automated X-ray diagnosis and can benefit the development of bias-field-robust automated diagnosis systems.
- Score: 21.317001512826476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The chest X-ray plays a key role in the screening and diagnosis of many
lung diseases, including COVID-19. Recently, many works have constructed deep
neural networks (DNNs) for chest X-ray images to realize automated and
efficient diagnosis of lung diseases. However, the bias field caused by
improper medical image acquisition widely exists in chest X-ray images, while
the robustness of DNNs to the bias field is rarely explored, which poses a
threat to X-ray-based automated diagnosis systems. In this paper, we study
this problem through the lens of recent adversarial attacks and propose a
brand-new attack, i.e., the adversarial bias field attack, where the bias
field, instead of additive noise, works as the adversarial perturbation for
fooling the DNNs. This novel attack poses a key problem: how to locally tune
the bias field to achieve a high attack success rate while maintaining its
spatial smoothness to guarantee high realism. These two goals contradict each
other, which makes the attack significantly challenging. To overcome this
challenge, we propose the adversarial-smooth bias field attack, which locally
tunes the bias field under joint smoothness and adversarial constraints. As a
result, the adversarial X-ray images not only fool the DNNs effectively but
also retain a very high level of realism. We validate our method on real
chest X-ray datasets with powerful DNNs, e.g., ResNet50, DenseNet121, and
MobileNet, and show properties different from state-of-the-art attacks in
both image realism and attack transferability. Our method reveals a potential
threat to DNN-based automated X-ray diagnosis and can benefit the development
of bias-field-robust automated diagnosis systems.
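The attack described above, a smooth multiplicative bias field tuned adversarially, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the low-order polynomial basis, the exponential parameterization of the field, and the stand-in `grad_fn` (which in practice would be backpropagation through the victim DNN) are all illustrative assumptions.

```python
import numpy as np

def bias_field(coeffs, h, w):
    """Smooth multiplicative bias field from a low-order 2D polynomial.

    The polynomial parameterization keeps the field spatially smooth by
    construction, so only the attack objective needs explicit optimization.
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    basis = np.stack([np.ones_like(xs), xs, ys, xs * ys,
                      xs ** 2, ys ** 2])             # (6, h, w)
    log_field = np.tensordot(coeffs, basis, axes=1)  # (h, w)
    return np.exp(log_field)                         # positive and smooth

def attack_step(image, coeffs, grad_fn, lr=0.01):
    """One gradient-ascent step on the polynomial coefficients.

    `grad_fn(adv_image)` is assumed to return d(loss)/d(adv_image) for the
    victim model; here it is a stand-in for backprop through a DNN.
    """
    h, w = image.shape
    adv = image * bias_field(coeffs, h, w)  # multiplicative perturbation
    g_img = grad_fn(adv)                    # (h, w) gradient w.r.t. pixels
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    basis = np.stack([np.ones_like(xs), xs, ys, xs * ys, xs ** 2, ys ** 2])
    # Chain rule: d(adv)/d(coeffs[k]) = adv * basis[k]
    g_coeffs = np.tensordot(basis, g_img * adv, axes=([1, 2], [0, 1]))
    return coeffs + lr * g_coeffs, adv
```

Because the field is the exponential of a low-order polynomial, it stays positive and spatially smooth by construction, so the ascent steps only have to pursue the adversarial objective over six coefficients.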
Related papers
- X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item
Detection [113.10386151761682]
Adversarial attacks targeting texture-free X-ray images are underexplored.
In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection.
We propose X-Adv to generate physically printable metals that act as an adversarial agent capable of deceiving X-ray detectors.
arXiv Detail & Related papers (2023-02-19T06:31:17Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision
Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages a body of literature to apply image transformations that compensate for the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models [8.853343040790795]
Jekyll is a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition.
We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes.
We also investigate defensive measures based on machine learning to detect images generated by Jekyll.
arXiv Detail & Related papers (2021-04-05T18:23:36Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
- Vulnerability of deep neural networks for detecting COVID-19 cases from
chest X-ray images to universal adversarial attacks [0.0]
Computer-aided systems based on deep neural networks (DNNs) have been developed to rapidly and accurately detect COVID-19 cases.
We evaluate the vulnerability of DNNs to a single perturbation, called a universal adversarial perturbation (UAP).
The results demonstrate that the models are vulnerable to nontargeted and targeted UAPs, even in case of small UAPs.
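The UAP setting evaluated above, one shared perturbation that fools a model across many images, can be sketched as follows. This is a simplified signed-gradient variant, not the DeepFool-based procedure of the original UAP work; `grad_fn` and the step sizes are illustrative stand-ins for backpropagation through the victim DNN.

```python
import numpy as np

def universal_perturbation(images, grad_fn, eps=0.05, lr=0.01, epochs=5):
    """Sketch of a non-targeted universal adversarial perturbation (UAP).

    A single perturbation `delta` is updated across all images with
    signed-gradient ascent and projected back onto the L-infinity ball of
    radius `eps`, so the same small perturbation attacks every input.
    """
    delta = np.zeros_like(images[0])
    for _ in range(epochs):
        for img in images:
            adv = np.clip(img + delta, 0.0, 1.0)   # keep valid pixel range
            g = grad_fn(adv)                       # d(loss)/d(adv) stand-in
            delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta
```

The projection step (`np.clip(..., -eps, eps)`) is what keeps the perturbation "small" in the sense the summary describes, while reuse across all training images is what makes it universal.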
arXiv Detail & Related papers (2020-05-22T08:54:41Z)
- SODA: Detecting Covid-19 in Chest X-rays with Semi-supervised Open Set
Domain Adaptation [5.6070625920019825]
We propose a novel domain adaptation method, the Semi-supervised Open set Domain Adversarial network (SODA).
SODA achieves leading classification performance compared with recent state-of-the-art models in separating COVID-19 from common pneumonia.
We also present initial results showing that SODA can produce better pathology localizations in the chest x-rays.
arXiv Detail & Related papers (2020-05-22T04:58:28Z)
- A Thorough Comparison Study on Adversarial Attacks and Defenses for
Common Thorax Disease Classification in Chest X-rays [63.675522663422896]
We review various adversarial attack and defense methods on chest X-rays.
We find that the attack and defense methods have poor performance with excessive iterations and large perturbations.
We propose a new defense method that is robust to different degrees of perturbations.
arXiv Detail & Related papers (2020-03-31T06:21:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.