Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation
- URL: http://arxiv.org/abs/2205.13618v1
- Date: Thu, 26 May 2022 20:46:28 GMT
- Title: Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation
- Authors: Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai
- Abstract summary: NMS-Sponge is a novel approach that negatively affects the decision latency of YOLO, a state-of-the-art object detector.
We demonstrate that the proposed UAP is able to increase the processing time of individual frames by adding "phantom" objects.
- Score: 26.77892878217983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks against deep learning-based object detectors have been
studied extensively in the past few years. These attacks aimed solely at
compromising the models' integrity (i.e., trustworthiness of the model's
prediction), while adversarial attacks targeting the models' availability, a
critical aspect in safety-critical domains such as autonomous driving, have not
been explored by the machine learning research community. In this paper, we
propose NMS-Sponge, a novel approach that negatively affects the decision
latency of YOLO, a state-of-the-art object detector, and compromises the
model's availability by applying a universal adversarial perturbation (UAP). In
our experiments, we demonstrate that the proposed UAP is able to increase the
processing time of individual frames by adding "phantom" objects while
preserving the detection of the original objects.
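
As a rough illustration of the sponge idea (not the authors' released code), the sketch below optimizes a universal perturbation so that many candidate boxes cross the detector's confidence threshold, inflating the work NMS must do per frame. The model interface (pre-NMS confidence logits of shape [B, N]), the 640x640 input size, and all hyperparameters are assumptions, and the paper's detection-preservation term is omitted for brevity.

```python
import torch

def train_sponge_uap(model, loader, eps=8/255, steps=200, lr=1e-2, conf_thresh=0.25):
    # Universal perturbation shared across all frames (assumed 640x640 RGB input).
    delta = torch.zeros(1, 3, 640, 640, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in loader:
            # `model` is assumed to return raw per-candidate confidence logits
            # (shape [B, N]) *before* non-maximum suppression.
            logits = model((x + delta).clamp(0, 1))
            conf = torch.sigmoid(logits)
            # Reward every candidate that crosses the confidence threshold:
            # each one becomes a "phantom" object that NMS must process.
            loss = -torch.relu(conf - conf_thresh).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
    return delta.detach()
```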
Related papers
- On the Inherent Robustness of One-Stage Object Detection against Out-of-Distribution Data [6.7236795813629]
We propose a novel algorithm for detecting unknown objects in image data.
It exploits supervised dimensionality reduction techniques to mitigate the effects of the curse of dimensionality on the features extracted by the model.
It utilizes high-resolution feature maps to identify potential unknown objects in an unsupervised fashion.
arXiv Detail & Related papers (2024-11-07T10:15:25Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach (a toy sketch of the caption-and-regenerate check appears after this list).
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- MEAOD: Model Extraction Attack against Object Detectors [45.817537875368956]
Model extraction attacks allow attackers to replicate a substitute model with comparable functionality to the victim model.
We propose an effective attack method called MEAOD for object detection models.
We achieve an extraction performance of over 70% under the given condition of a 10k query budget.
arXiv Detail & Related papers (2023-12-22T13:28:50Z)
- Incremental Object-Based Novelty Detection with Feedback Loop [18.453867533201308]
Object-based Novelty Detection (ND) aims to identify unknown objects that do not belong to classes seen during training.
Traditional approaches to ND focus on one-time offline post-processing of the pretrained object detector's output.
We propose a novel framework for object-based ND, assuming that human feedback can be requested on the predicted output.
arXiv Detail & Related papers (2023-11-15T14:46:20Z)
- SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack the Segment Anything Model (SAM) with an image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that results in a UAP generation method based on self-supervised contrastive learning (CL).
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results (a generic sketch of such a contrastive objective appears after this list).
arXiv Detail & Related papers (2023-10-19T02:49:24Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples and to perform dataset inference (a timing-measurement sketch appears after this list).
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Adversarial Vulnerability of Temporal Feature Networks for Object Detection [5.525433572437716]
We study whether temporal feature networks for object detection are vulnerable to universal adversarial attacks.
We evaluate attacks of two types: imperceptible noise for the whole image and locally-bound adversarial patch.
Our experiments on the KITTI and nuScenes datasets demonstrate that a model robustified via K-PGD is able to withstand the studied attacks (a sketch of one PGD adversarial-training step appears after this list).
arXiv Detail & Related papers (2022-08-23T07:08:54Z)
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
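
For the MirrorCheck entry above, a toy version of the caption-and-regenerate check might look like the following. `caption_fn`, `t2i_fn`, and `embed_fn` are hypothetical placeholders for the target VLM's captioner, a text-to-image model, and an image encoder (e.g. CLIP); they are assumptions, not the paper's API.

```python
import torch.nn.functional as F

def mirrorcheck_score(image, caption_fn, t2i_fn, embed_fn):
    caption = caption_fn(image)   # target VLM captions the input
    mirrored = t2i_fn(caption)    # T2I model regenerates an image from the caption
    # Adversarial inputs tend to produce captions (and regenerated images)
    # that drift from the input, so low similarity flags suspicion.
    return F.cosine_similarity(embed_fn(image), embed_fn(mirrored), dim=-1)
```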
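For the SAM-meets-UAP entry, one generic InfoNCE-style reading of contrastive UAP generation is sketched below; this is not necessarily the paper's exact objective. `encoder` stands for an image encoder such as SAM's, `delta` is the UAP being optimized (created with `requires_grad=True`), and `opt` optimizes `delta` alone.

```python
import torch
import torch.nn.functional as F

def cl_uap_step(encoder, x, delta, opt, tau=0.1, eps=10/255):
    z_adv = F.normalize(encoder((x + delta).clamp(0, 1)), dim=-1)
    # Positive: another augmented view (horizontal flip) of the perturbed image.
    z_pos = F.normalize(encoder((torch.flip(x, dims=[-1]) + delta).clamp(0, 1)), dim=-1)
    with torch.no_grad():
        # Negative: the clean image, so perturbed features are pushed away from it.
        z_neg = F.normalize(encoder(x), dim=-1)
    pos = (z_adv * z_pos).sum(-1) / tau
    neg = (z_adv * z_neg).sum(-1) / tau
    loss = -torch.log(pos.exp() / (pos.exp() + neg.exp())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # project back onto the perturbation budget
    return loss.item()
```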
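For the variable-time-inference entry, the timing side channel can be illustrated by simply measuring end-to-end prediction latency; `detector` is any callable that runs the forward pass plus NMS post-processing.

```python
import time
import numpy as np

def nms_latency(detector, frame, trials=50):
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        detector(frame)  # forward pass plus NMS post-processing
        samples.append(time.perf_counter() - t0)
    # Inputs yielding many candidate boxes make NMS measurably slower,
    # so per-frame latency leaks information about the raw model output.
    return float(np.median(samples))  # median resists scheduler noise
```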
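For the temporal-feature-networks entry, "K-PGD robustification" is read here as standard K-step PGD adversarial training; the sketch below shows one training step under that assumption, with illustrative hyperparameters.

```python
import torch

def kpgd_train_step(model, loss_fn, x, y, opt, eps=8/255, alpha=2/255, k=5):
    # Inner maximization: K steps of projected gradient ascent on the input.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(k):
        loss = loss_fn(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: update the model on the adversarial example.
    opt.zero_grad()
    loss_fn(model((x + delta).clamp(0, 1)), y).backward()
    opt.step()
```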