Bio-Inspired Adversarial Attack Against Deep Neural Networks
- URL: http://arxiv.org/abs/2107.02895v1
- Date: Wed, 30 Jun 2021 03:23:52 GMT
- Title: Bio-Inspired Adversarial Attack Against Deep Neural Networks
- Authors: Bowei Xi and Yujie Chen and Fan Fei and Zhan Tu and Xinyan Deng
- Abstract summary: The paper develops a new adversarial attack against deep neural networks (DNN) based on applying bio-inspired design to moving objects.
To the best of our knowledge, this is the first work to introduce physical attacks with a moving object.
- Score: 28.16483200512112
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The paper develops a new adversarial attack against deep neural networks
(DNN), based on applying bio-inspired design to moving physical objects. To the
best of our knowledge, this is the first work to introduce physical attacks
with a moving object. Instead of following the dominant attack strategy in
the existing literature, i.e., introducing minor perturbations to a digital
input or a stationary physical object, we show two new successful attack
strategies in this paper. We show that by superimposing several patterns onto
one physical object, a DNN becomes confused and picks one of the patterns to
assign a class label. Our experiment with three flapping wing robots
demonstrates the possibility of developing an adversarial camouflage to cause
a targeted mistake by a DNN. We also show that certain motions can reduce the
dependency among consecutive frames in a video and make an object detector
"blind", i.e., unable to detect that an object exists in the video. Hence, in
a successful physical attack against a DNN, targeted motion against the system
should also be considered.
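As an illustration of the superimposition strategy described above, here is a minimal sketch (not the authors' code) that alpha-blends several patterns onto an object image and checks which class a pretrained ImageNet classifier assigns; the file names and blending weight are hypothetical.

```python
# Sketch only: superimpose several patterns onto one object image and see
# which class a pretrained ImageNet classifier picks.
import torch
from PIL import Image
from torchvision import models, transforms

to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def superimpose(obj_img, pattern_imgs, alpha=0.35):
    """Alpha-blend each pattern onto the object image, one after another."""
    x = to_tensor(obj_img)
    for p in pattern_imgs:
        x = (1 - alpha) * x + alpha * to_tensor(p)
    return x.clamp(0, 1)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

obj = Image.open("robot.jpg").convert("RGB")                       # hypothetical files
patterns = [Image.open(f"pattern_{i}.jpg").convert("RGB") for i in range(3)]

with torch.no_grad():
    logits = model(normalize(superimpose(obj, patterns)).unsqueeze(0))
print("predicted ImageNet class id:", logits.argmax(dim=1).item())
```

If the classifier latches onto one of the blended patterns rather than the underlying object, the superimposition has induced the kind of confusion the paper describes.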
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible and vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well between diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed method in attacking anything.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Adversarial alignment: Breaking the trade-off between the strength of an attack and its relevance to human perception [10.883174135300418]
Adversarial attacks have long been considered the "Achilles' heel" of deep learning.
Here, we investigate how the robustness of DNNs to adversarial attacks has evolved as their accuracy on ImageNet has continued to improve.
arXiv Detail & Related papers (2023-06-05T20:26:17Z)
- Visually Adversarial Attacks and Defenses in the Physical World: A Survey [27.40548512511512]
Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack forms.
In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
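A minimal sketch of how such poison-only, untargeted poisoning of a detection dataset could look (an assumption on our part, not the paper's released code): stamp a small trigger patch on one object and drop its annotation, so a model trained on the poisoned data learns to miss trigger-stamped objects. The trigger size, poisoning rate, and helper names are hypothetical.

```python
# Sketch only: poison a detection sample by stamping a trigger on one object
# and deleting that object's annotation (untargeted, poison-only backdoor).
import random
import torch

def stamp_trigger(image, box, trigger):
    """Paste the trigger patch near the top-left corner of the object's box."""
    h, w = image.shape[-2:]
    th, tw = trigger.shape[-2:]
    y1 = max(0, min(int(box[1]), h - th))
    x1 = max(0, min(int(box[0]), w - tw))
    image[..., y1:y1 + th, x1:x1 + tw] = trigger
    return image

def poison_sample(image, target, trigger, rate=0.1):
    """With probability `rate`, stamp one object and drop its box and label."""
    if target["boxes"].shape[0] > 0 and random.random() < rate:
        i = random.randrange(target["boxes"].shape[0])
        image = stamp_trigger(image, target["boxes"][i], trigger)
        keep = torch.arange(target["boxes"].shape[0]) != i
        target = {"boxes": target["boxes"][keep], "labels": target["labels"][keep]}
    return image, target

trigger = torch.ones(3, 16, 16)   # hypothetical 16x16 white patch
```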
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Adversarial Detection: Attacking Object Detection in Real Time [10.547024752811437]
This paper presents the first real-time online attack against object detection models.
We devise three attacks that fabricate bounding boxes for nonexistent objects at desired locations.
The attacks achieve a success rate of about 90% within about 20 iterations.
arXiv Detail & Related papers (2022-09-05T13:32:41Z)
- Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs [0.0]
In this paper, we demonstrate a novel physical adversarial attack technique called Adversarial Zoom Lens (AdvZL).
AdvZL uses a zoom lens to zoom in and out of pictures of the physical world, fooling DNNs without changing the characteristics of the target object.
In a digital environment, we construct a dataset based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs.
In the physical environment, we manipulate the zoom lens to zoom in and out of the target object, and generate adversarial samples.
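A minimal sketch of the zoom idea in a digital setting (not the authors' code): emulate an equal-scale zoom by center-cropping and resizing, then check whether a pretrained classifier's prediction flips. The file name and zoom factors are hypothetical.

```python
# Sketch only: emulate an equal-scale zoom by center-cropping and resizing,
# then check whether the classifier's prediction changes with the zoom factor.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict(img):
    with torch.no_grad():
        return model(tf(img).unsqueeze(0)).argmax(dim=1).item()

img = Image.open("target.jpg").convert("RGB")   # hypothetical file name
w, h = img.size
for zoom in (1.0, 1.5, 2.0, 3.0):               # 1.0 means no zoom
    cw, ch = int(w / zoom), int(h / zoom)
    left, top = (w - cw) // 2, (h - ch) // 2
    zoomed = img.crop((left, top, left + cw, top + ch))
    print(f"zoom x{zoom}: predicted class id {predict(zoomed)}")
```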
arXiv Detail & Related papers (2022-06-23T13:03:08Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
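For intuition, here is a minimal sketch of a targeted universal perturbation in the spirit of DT-UAP (not the authors' DTA algorithm): a single perturbation optimized over batches of source-class images to push predictions toward a chosen sink class, kept small by an L-infinity clamp. The budget, step count, learning rate, and sink class are hypothetical.

```python
# Sketch only: optimize one small universal perturbation that pushes a batch
# of source-class images toward a chosen sink class (hypothetical settings).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
for p in model.parameters():
    p.requires_grad_(False)

eps, steps, lr, sink_class = 10 / 255, 100, 1e-2, 954
delta = torch.zeros(1, 3, 224, 224, requires_grad=True)

def train_uap(loader):
    """`loader` yields (images, labels) batches of normalized source-class images."""
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in loader:
            loss = F.cross_entropy(model(x + delta),
                                   torch.full((x.size(0),), sink_class))
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)   # keep the perturbation within the budget
    return delta.detach()
```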
arXiv Detail & Related papers (2020-10-07T09:08:51Z)