Adversarial Medical Image with Hierarchical Feature Hiding
- URL: http://arxiv.org/abs/2312.01679v1
- Date: Mon, 4 Dec 2023 07:04:20 GMT
- Title: Adversarial Medical Image with Hierarchical Feature Hiding
- Authors: Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng,
and S. Kevin Zhou
- Abstract summary: Adversarial examples (AEs) pose a serious security risk to deep learning based methods for medical images.
It has been discovered that conventional adversarial attacks like PGD are easy to distinguish in the feature space, resulting in accurate reactive defenses.
We propose a simple yet effective hierarchical feature constraint (HFC), a novel add-on to conventional white-box attacks, which hides the adversarial features within the target feature distribution.
- Score: 38.551147309335185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning based methods for medical images can be easily compromised by
adversarial examples (AEs), posing a serious security risk to clinical
decision-making. It has been discovered that conventional adversarial attacks
such as PGD, which optimize the classification logits, are easy to distinguish
in the feature space, resulting in accurate reactive defenses. To better
understand this phenomenon and reassess the reliability of reactive defenses
for medical AEs, we thoroughly investigate the characteristics of conventional
medical AEs. Specifically, we first theoretically prove that conventional
adversarial attacks change the outputs by continuously optimizing vulnerable
features in a fixed direction, thereby leading to outlier representations in
the feature space. Then, a stress test is conducted to reveal the
vulnerability of medical images by comparison with natural images.
Interestingly, this vulnerability is a double-edged sword that can be
exploited to hide AEs. We then propose a simple yet effective hierarchical
feature constraint (HFC), a novel add-on to conventional white-box attacks,
which hides the adversarial features within the target feature distribution.
The proposed method is evaluated on three medical datasets, both 2D and 3D,
with different modalities. The experimental results demonstrate the
superiority of HFC, i.e., it bypasses an array of state-of-the-art adversarial
medical AE detectors more efficiently than competing adaptive attacks, which
reveals the deficiencies of medical reactive defenses and paves the way for
developing more robust defenses in the future.
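To make the mechanism concrete, below is a minimal sketch (not the authors' released implementation) of how a feature-hiding constraint can be bolted onto targeted PGD: besides the usual classification loss, the attack penalizes the Mahalanobis distance between the adversarial features and the target-class feature distribution, so the perturbed representation stays inside that distribution rather than becoming an outlier. The sketch assumes a single feature layer, a Gaussian model of the target-class features, and hypothetical names such as feature_extractor and classifier; the actual HFC constrains features hierarchically across multiple layers.

# Minimal sketch: targeted PGD with an added feature-distribution constraint.
# Assumptions (not from the paper's code): one feature layer, a Gaussian model
# of target-class features (mean mu_t, inverse covariance cov_inv_t), and
# hypothetical feature_extractor / classifier modules.
import torch
import torch.nn.functional as F

def hfc_style_pgd(x, y_target, feature_extractor, classifier,
                  mu_t, cov_inv_t, eps=8/255, alpha=2/255, steps=40, lam=1.0):
    """Targeted PGD whose loss also penalizes the Mahalanobis distance
    between adversarial features and the target-class feature mean."""
    x_adv = (x.clone().detach()
             + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats = feature_extractor(x_adv)      # (B, D) penultimate features
        logits = classifier(feats)

        # 1) usual targeted classification loss (push logits toward y_target)
        cls_loss = F.cross_entropy(logits, y_target)

        # 2) feature-hiding term: keep features close to the target-class
        #    distribution so reactive detectors do not see an outlier
        diff = feats - mu_t                   # (B, D)
        maha = torch.einsum('bi,ij,bj->b', diff, cov_inv_t, diff).mean()

        loss = cls_loss + lam * maha
        grad = torch.autograd.grad(loss, x_adv)[0]

        # descend on the combined loss, staying within the L_inf ball around x
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()

With lam = 0 the procedure reduces to ordinary targeted PGD; the second loss term is what keeps the adversarial features from drifting into the outlier regions that reactive feature-space detectors rely on.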
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - Toward Robust Diagnosis: A Contour Attention Preserving Adversarial
Defense for COVID-19 Detection [10.953610196636784]
We propose a Contour Attention Preserving (CAP) method based on lung cavity edge extraction.
Experimental results indicate that the proposed method achieves state-of-the-art performance in multiple adversarial defense and generalization tasks.
arXiv Detail & Related papers (2022-11-30T08:01:23Z) - Adversarial Visual Robustness by Causal Intervention [56.766342028800445]
Adversarial training is the de facto most promising defense against adversarial examples.
Yet, its passive nature inevitably prevents it from being immune to unknown attackers.
We provide a causal viewpoint of adversarial vulnerability: the cause is the confounder that ubiquitously exists in learning.
arXiv Detail & Related papers (2021-06-17T14:23:54Z) - MixDefense: A Defense-in-Depth Framework for Adversarial Example
Detection Based on Statistical and Semantic Analysis [14.313178290347293]
We propose a multilayer defense-in-depth framework for AE detection, namely MixDefense.
We leverage the 'noise' features extracted from the inputs to discover the statistical difference between natural images and tampered ones for AE detection.
We show that the proposed MixDefense solution outperforms the existing AE detection techniques by a considerable margin.
arXiv Detail & Related papers (2021-04-20T15:57:07Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - A Hierarchical Feature Constraint to Camouflage Medical Adversarial
Attacks [31.650769109900477]
We investigate the intrinsic characteristic of medical adversarial attacks in feature space.
We propose a novel hierarchical feature constraint (HFC) as an add-on to existing adversarial attacks.
We evaluate the proposed method on two public medical image datasets.
arXiv Detail & Related papers (2020-12-17T11:00:02Z) - SLAP: Improving Physical Adversarial Examples with Short-Lived
Adversarial Perturbations [19.14079118174123]
Short-Lived Adversarial Perturbations (SLAP) is a novel technique that allows adversaries to realize physically robust real-world AEs by using a light projector.
SLAP allows the adversary greater control over the attack compared to adversarial patches.
We study the feasibility of SLAP in the self-driving scenario, targeting both object detector and traffic sign recognition tasks.
arXiv Detail & Related papers (2020-07-08T14:11:21Z) - Towards Understanding the Adversarial Vulnerability of Skeleton-based
Action Recognition [133.35968094967626]
Skeleton-based action recognition has attracted increasing attention due to its strong adaptability to dynamic circumstances.
With the help of deep learning techniques, it has also witnessed substantial progress and currently achieves around 90% accuracy in benign environments.
Research on the vulnerability of skeleton-based action recognition under different adversarial settings remains scant.
arXiv Detail & Related papers (2020-05-14T17:12:52Z)