Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks
- URL: http://arxiv.org/abs/2404.15881v1
- Date: Wed, 24 Apr 2024 13:51:56 GMT
- Title: Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks
- Authors: Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee
- Abstract summary: "Steal now, attack later" attacks can be employed to exploit potential vulnerabilities in an AI service.
The average cost of each attack is less than $1, posing a significant threat to AI security.
- Score: 47.9744734181236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latency attacks against object detection are a variant of adversarial attacks that aim to inflate the inference time by generating additional ghost objects in a target image. However, generating ghost objects in the black-box scenario remains a challenge since information about these unqualified objects remains opaque. In this study, we demonstrate the feasibility of generating ghost objects in adversarial examples by extending the concept of "steal now, decrypt later" attacks. These adversarial examples, once produced, can be employed to exploit potential vulnerabilities in the AI service, giving rise to significant security concerns. The experimental results demonstrate that the proposed attack succeeds against various commonly used models and the Google Vision API without any prior knowledge of the target model. Additionally, the average cost of each attack is less than \$1, posing a significant threat to AI security.
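The abstract's core mechanism is that extra ghost objects inflate the post-processing cost of a detector, since stages such as non-maximum suppression (NMS) scale with the number of candidate boxes. The sketch below is not the paper's attack; it is a minimal illustration, with hypothetical helpers `iou`, `nms`, and `random_boxes`, of how a scene flooded with ghost boxes forces far more pairwise IoU work than a benign scene.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    """Greedy NMS; counts pairwise IoU evaluations as a latency proxy."""
    order = np.argsort(scores)[::-1]
    keep, ops = [], 0
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        if rest.size == 0:
            break
        ops += rest.size  # one IoU computation per remaining candidate
        order = rest[iou(boxes[i], boxes[rest]) < thr]
    return keep, ops

def random_boxes(rng, n):
    """Random axis-aligned boxes inside a 500x500 image."""
    xy = rng.uniform(0, 500, size=(n, 2))
    wh = rng.uniform(10, 60, size=(n, 2))
    return np.hstack([xy, xy + wh])

rng = np.random.default_rng(0)
clean = random_boxes(rng, 10)    # benign scene: a handful of candidates
ghost = random_boxes(rng, 500)   # adversarial scene: many ghost candidates
_, ops_clean = nms(clean, rng.uniform(size=10))
_, ops_ghost = nms(ghost, rng.uniform(size=500))
print(ops_clean, ops_ghost)      # the ghost scene costs far more IoU work
```

Even with aggressive suppression, the first NMS round alone compares the top box against all other candidates, so the cost floor grows with the number of ghost objects the attacker injects.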
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Mask-based Invisible Backdoor Attacks on Object Detection [0.0]
Deep learning models are vulnerable to backdoor attacks.
In this study, we propose an effective invisible backdoor attack on object detection utilizing a mask-based approach.
arXiv Detail & Related papers (2024-03-20T12:27:30Z)
- Impart: An Imperceptible and Effective Label-Specific Backdoor Attack [15.859650783567103]
We propose a novel imperceptible backdoor attack framework, named Impart, in the scenario where the attacker has no access to the victim model.
Specifically, in order to enhance the attack capability of the all-to-all setting, we first propose a label-specific attack.
arXiv Detail & Related papers (2024-03-18T07:22:56Z)
- Attacking by Aligning: Clean-Label Backdoor Attacks on Object Detection [24.271795745084123]
Deep neural networks (DNNs) have shown unprecedented success in object detection tasks.
Backdoor attacks on object detection tasks have not been properly investigated or explored.
We propose a simple yet effective backdoor attack method against object detection without modifying the ground truth annotations.
arXiv Detail & Related papers (2023-07-19T22:46:35Z)
- Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model to make wrong predictions with imperceptible perturbation.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z)
- GLOW: Global Layout Aware Attacks for Object Detection [27.46902978168904]
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS validation set.
arXiv Detail & Related papers (2023-02-27T22:01:34Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.