Adversarial Detection: Attacking Object Detection in Real Time
- URL: http://arxiv.org/abs/2209.01962v6
- Date: Tue, 12 Dec 2023 11:27:29 GMT
- Title: Adversarial Detection: Attacking Object Detection in Real Time
- Authors: Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, and Johan Wahlstrom
- Abstract summary: This paper presents the first real-time online attack against object detection models.
We devise three attacks that fabricate bounding boxes for nonexistent objects at desired locations.
The attacks achieve a success rate of about 90% within about 20 iterations.
- Score: 10.547024752811437
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intelligent robots rely on object detection models to perceive the
environment. Following advances in deep learning security, it has been revealed
that object detection models are vulnerable to adversarial attacks. However,
prior research primarily focuses on attacking static images or offline videos.
Therefore, it is still unclear if such attacks could jeopardize real-world
robotic applications in dynamic environments. This paper bridges this gap by
presenting the first real-time online attack against object detection models.
We devise three attacks that fabricate bounding boxes for nonexistent objects
at desired locations. The attacks achieve a success rate of about 90% within
about 20 iterations. The demo video is available at
https://youtu.be/zJZ1aNlXsMU.
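The abstract states the attack's goal (fabricating bounding boxes at desired locations within about 20 iterations) but gives no implementation details. The PGD-style loop below is a minimal sketch of how such an iterative fabrication attack is typically structured, assuming a differentiable detector that returns a dense per-cell class-score map; `detector`, `fabrication_loss`, and every hyperparameter are illustrative assumptions, not the authors' code.

```python
import torch

def fabrication_loss(score_map, target_box, target_cls):
    # Hypothetical objective: raise the target-class confidence inside the
    # desired box; score_map is assumed to have shape [H, W, num_classes].
    x0, y0, x1, y1 = target_box
    return -score_map[y0:y1, x0:x1, target_cls].mean()

def fabrication_attack(detector, frame, target_box, target_cls,
                       step=2 / 255, eps=16 / 255, iters=20):
    # Iteratively perturb one video frame until the detector reports a
    # high-confidence object of target_cls inside target_box.
    delta = torch.zeros_like(frame, requires_grad=True)
    for _ in range(iters):
        score_map = detector(frame + delta)    # assumed dense score output
        loss = fabrication_loss(score_map, target_box, target_cls)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)            # keep the perturbation subtle
        delta.grad.zero_()
    return (frame + delta).clamp(0, 1).detach()
```

The iteration budget mirrors the roughly 20 iterations quoted in the abstract; everything else is a generic adversarial-example recipe rather than the paper's method.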
Related papers
- Mask-based Invisible Backdoor Attacks on Object Detection [0.0]
Deep learning models are vulnerable to backdoor attacks.
In this study, we propose an effective invisible backdoor attack on object detection utilizing a mask-based approach.
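The summary names only a "mask-based approach"; a common way to keep a backdoor trigger invisible is to blend a faint full-image mask into each poisoned picture. The sketch below is a generic illustration under that assumption, not the paper's method.

```python
import numpy as np

def apply_invisible_trigger(image, mask, alpha=8 / 255):
    # Blend a faint trigger mask into an image in [0, 1]; a small alpha keeps
    # the change imperceptible to humans while remaining learnable by the model.
    return np.clip(image + alpha * mask, 0.0, 1.0)
```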
arXiv Detail & Related papers (2024-03-20T12:27:30Z)
- Patch of Invisibility: Naturalistic Physical Black-Box Adversarial Attacks on Object Detectors [0.0]
We propose a direct, black-box, gradient-free method to generate naturalistic physical adversarial patches for object detectors.
To our knowledge, this is the first and only method that performs black-box physical attacks directly on object detection models.
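As a toy illustration of a gradient-free, query-only loop, the greedy (1+1)-style random search below mutates a patch and keeps any mutation that lowers the detector's confidence; `score_fn` is an assumed black-box oracle, and the naturalistic constraint the paper emphasizes is not modeled here.

```python
import numpy as np

def gradient_free_patch(score_fn, shape=(64, 64, 3), iters=1000, sigma=0.1):
    # Greedy (1+1)-style random search: no gradients, only black-box queries.
    rng = np.random.default_rng(0)
    patch = rng.uniform(0.0, 1.0, shape)
    best = score_fn(patch)                  # assumed: mean detector confidence
    for _ in range(iters):
        cand = np.clip(patch + sigma * rng.standard_normal(shape), 0.0, 1.0)
        score = score_fn(cand)
        if score < best:                    # keep mutations that evade better
            patch, best = cand, score
    return patch
```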
arXiv Detail & Related papers (2023-03-07T21:03:48Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
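The mechanics of such a poison-only disappearance backdoor can be sketched generically: stamp the trigger onto some annotated training objects and delete their boxes, so the detector learns to associate the trigger with "no object". The trigger design and poisoning rate below are assumptions, not the paper's recipe.

```python
import numpy as np

def poison_for_disappearance(image, boxes, trigger, rate=0.5, seed=0):
    # Stamp a small trigger patch onto a fraction of annotated objects and
    # drop their bounding boxes from the ground truth.
    rng = np.random.default_rng(seed)
    th, tw = trigger.shape[:2]
    kept = []
    for (x0, y0, x1, y1) in boxes:
        if rng.random() < rate and (y1 - y0) > th and (x1 - x0) > tw:
            image[y0:y0 + th, x0:x0 + tw] = trigger  # stamp the trigger
            # annotation intentionally dropped: the object "disappears"
        else:
            kept.append((x0, y0, x1, y1))
    return image, kept
```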
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
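Here the trigger is the transformation itself, so poisoning needs no visible patch at all; a minimal sketch follows, where the poisoning rate, angle, and labeling scheme are assumptions.

```python
import random

def rotation_poison(samples, target_label, rate=0.1, angle=90.0, seed=0):
    # Poison a fraction of (PIL image, label) pairs by rotating the image and
    # relabeling it to the target class; rotating any image at test time then
    # activates the backdoor.
    rng = random.Random(seed)
    out = []
    for img, label in samples:
        if rng.random() < rate:
            out.append((img.rotate(angle), target_label))  # poisoned sample
        else:
            out.append((img, label))                       # clean sample
    return out
```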
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- BadDet: Backdoor Attacks on Object Detection [42.40418007499009]
We propose four kinds of backdoor attacks for the object detection task.
A trigger can falsely generate an object of the target class.
A single trigger can change the predictions of all objects in an image to the target class.
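A generic sketch of the "object generation" poisoning step described above: stamp the trigger at a random position and inject a ground-truth box of the target class around it, so the trained detector hallucinates that class wherever the trigger appears. Array layouts and names are assumptions, not the BadDet code.

```python
import numpy as np

def poison_for_generation(image, boxes, labels, trigger, target_cls, seed=0):
    # Stamp the trigger at a random location and add a fabricated annotation.
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    th, tw = trigger.shape[:2]
    y = int(rng.integers(0, h - th))
    x = int(rng.integers(0, w - tw))
    image[y:y + th, x:x + tw] = trigger
    return image, boxes + [(x, y, x + tw, y + th)], labels + [target_cls]
```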
arXiv Detail & Related papers (2022-05-28T18:02:11Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World [20.385028861767218]
This work demonstrates that existing object detectors are inherently susceptible to physical backdoor attacks.
We show that such a backdoor can be implanted into the object detector through two exploitable attack scenarios.
We evaluate three popular object detection algorithms: anchor-based Yolo-V3 and Yolo-V4, and anchor-free CenterNet.
arXiv Detail & Related papers (2022-01-21T10:11:27Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improve the attack by generating patches that take into account the semantic distance between the target object and its classification.
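One way to read "patches placed at multiple predetermined locations" is an expectation-over-placements objective during patch training; in the minimal sketch below, `detector_score` (a callable returning the detector's confidence for the target object), the CHW tensor layout, and the test-time logic that switches patches dynamically are all assumptions.

```python
import torch

def multi_location_loss(detector_score, frame, patch, locations):
    # Average the detector's confidence over every predetermined placement of
    # the patch, so a single optimized patch stays effective at all of them.
    ph, pw = patch.shape[-2:]
    losses = []
    for x, y in locations:
        adv = frame.clone()
        adv[..., y:y + ph, x:x + pw] = patch   # paste the patch at (x, y)
        losses.append(detector_score(adv))     # assumed scalar confidence
    return torch.stack(losses).mean()          # minimize to evade detection
```

Gradient descent on `patch` through this loss (with pixel clamping) yields one patch that remains effective across all listed placements.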
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.