A Hierarchical Feature Constraint to Camouflage Medical Adversarial
Attacks
- URL: http://arxiv.org/abs/2012.09501v1
- Date: Thu, 17 Dec 2020 11:00:02 GMT
- Title: A Hierarchical Feature Constraint to Camouflage Medical Adversarial
Attacks
- Authors: Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng and S. Kevin
Zhou
- Abstract summary: We investigate the intrinsic characteristic of medical adversarial attacks in feature space.
We propose a novel hierarchical feature constraint (HFC) as an add-on to existing adversarial attacks.
We evaluate the proposed method on two public medical image datasets.
- Score: 31.650769109900477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) for medical images are extremely vulnerable to
adversarial examples (AEs), which poses security concerns on clinical decision
making. Luckily, medical AEs are also easy to detect in hierarchical feature
space per our study herein. To better understand this phenomenon, we thoroughly
investigate the intrinsic characteristic of medical AEs in feature space,
providing both empirical evidence and theoretical explanations for the
question: why are medical adversarial attacks easy to detect? We first perform
a stress test to reveal the vulnerability of deep representations of medical
images, in contrast to natural images. We then theoretically prove that typical
adversarial attacks on a binary disease diagnosis network manipulate the
prediction by continuously optimizing the vulnerable representations in a fixed
direction, resulting in outlier features that make medical AEs easy to detect.
However, this vulnerability can also be exploited to hide the AEs in the
feature space. We propose a novel hierarchical feature constraint (HFC) as an
add-on to existing adversarial attacks, which encourages adversarial
representations to hide within the normal feature distribution. We evaluate
the proposed method on two public medical image datasets, namely Fundoscopy
and Chest X-Ray. Experimental results demonstrate the superiority of our
adversarial attack method: it bypasses an array of state-of-the-art
adversarial detectors more easily than competing attack methods, supporting
our finding that the vulnerability of medical features gives an attacker more
room to manipulate adversarial representations.
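To make the mechanism concrete, the following is a minimal PyTorch-style sketch of how a feature-space constraint of this kind could be added to a targeted PGD attack: besides the usual classification loss, each step also penalizes how far the intermediate representation drifts from a Gaussian model of normal features. The single constrained layer, the Gaussian statistics (normal_mean, normal_cov_inv, assumed to be fitted on clean features beforehand), and the weight lambda_hfc are illustrative assumptions, not the paper's exact formulation, which constrains features hierarchically across multiple layers.

```python
import torch
import torch.nn.functional as F

def hfc_pgd_attack(model, feature_layer, x, y_target, normal_mean, normal_cov_inv,
                   eps=8 / 255, alpha=1 / 255, steps=40, lambda_hfc=1.0):
    """Targeted PGD with an illustrative hierarchical feature constraint (HFC).

    In addition to pushing the logits toward the target class, each step
    penalizes the squared Mahalanobis distance between the adversarial
    example's intermediate features and a Gaussian fitted on clean features,
    so the adversarial representation stays inside the normal feature
    distribution instead of drifting toward an outlier region.
    Inputs are assumed to lie in [0, 1].
    """
    feats = {}

    def hook(_module, _inputs, output):              # capture intermediate features
        feats["h"] = output.flatten(start_dim=1)

    handle = feature_layer.register_forward_hook(hook)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        cls_loss = F.cross_entropy(logits, y_target)          # targeted term

        diff = feats["h"] - normal_mean                        # feature-space constraint
        maha = torch.einsum("bi,ij,bj->b", diff, normal_cov_inv, diff).mean()

        loss = cls_loss + lambda_hfc * maha                    # minimize both terms
        grad = torch.autograd.grad(loss, x_adv)[0]

        x_adv = x_adv.detach() - alpha * grad.sign()           # gradient descent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)               # project to L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    handle.remove()
    return x_adv.detach()
```

Without the feature-space term, the attack keeps optimizing the vulnerable representation in one direction until it becomes an outlier, which is exactly the signature that feature-space detectors exploit; the added penalty removes that signature.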
Related papers
- Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- Adversarial Medical Image with Hierarchical Feature Hiding [38.551147309335185]
Adversarial examples (AEs) pose a serious security threat to deep learning based methods for medical images.
It has been discovered that conventional adversarial attacks like PGD are easy to distinguish in the feature space, resulting in accurate reactive defenses.
We propose a simple-yet-effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks, which helps hide the adversarial feature in the target feature distribution.
arXiv Detail & Related papers (2023-12-04T07:04:20Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges [64.63744409431001]
We present a comprehensive survey on advances in adversarial attacks and defenses for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
arXiv Detail & Related papers (2023-03-24T16:38:58Z)
- Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection [10.953610196636784]
We propose a Contour Attention Preserving (CAP) method based on lung cavity edge extraction.
Experimental results indicate that the proposed method achieves state-of-the-art performance in multiple adversarial defense and generalization tasks.
arXiv Detail & Related papers (2022-11-30T08:01:23Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint on adversarial vulnerability: the cause is a confounder that ubiquitously exists in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z)
- State-of-the-art segmentation network fooled to segment a heart symbol in chest X-Ray images [5.808118248166566]
Adversarial attacks consist of maliciously changing the input data to mislead the predictions of automated decision systems.
We studied the effectiveness of adversarial attacks in targeted modification of segmentations of anatomical structures in chest X-rays.
arXiv Detail & Related papers (2021-03-31T22:20:59Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features as a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector (a minimal sketch of such a detector follows this list).
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
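As a counterpart to the attack sketch above, here is a minimal illustration of the kind of feature-space, likelihood-style detector that the last entry (and the reactive defenses discussed in the abstract) point to: fit a Gaussian to clean penultimate-layer features and flag samples whose Mahalanobis score is an outlier. The single-Gaussian model, the choice of layer, and the thresholding procedure are simplifying assumptions for illustration, not the cited paper's exact detector.

```python
import numpy as np

def fit_feature_gaussian(clean_feats):
    """Fit a Gaussian to features of clean (benign) inputs.

    clean_feats: array of shape (N, D), e.g. penultimate-layer activations.
    Returns the mean and a regularized inverse covariance matrix.
    """
    mean = clean_feats.mean(axis=0)
    cov = np.cov(clean_feats, rowvar=False) + 1e-6 * np.eye(clean_feats.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(feats, mean, cov_inv):
    """Per-sample outlier score: large values suggest the representation
    falls outside the normal feature distribution (a possible AE)."""
    diff = feats - mean
    return np.einsum("bi,ij,bj->b", diff, cov_inv, diff)

def detect_adversarial(feats, mean, cov_inv, threshold):
    """Flag inputs whose feature-space score exceeds a threshold, chosen on
    a validation set (e.g., for a fixed false-positive rate)."""
    return mahalanobis_score(feats, mean, cov_inv) > threshold
```

An HFC-style attack is constructed to keep precisely this kind of score low, which is why the abstract reports that it bypasses such feature-space detectors more easily than conventional attacks.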