Object-fabrication Targeted Attack for Object Detection
- URL: http://arxiv.org/abs/2212.06431v2
- Date: Wed, 14 Dec 2022 07:33:12 GMT
- Title: Object-fabrication Targeted Attack for Object Detection
- Authors: Xuchong Zhang, Changfeng Sun, Haoliang Han, Hang Wang, Hongbin Sun and
Nanning Zheng
- Abstract summary: Adversarial attacks for object detection comprise targeted and untargeted attacks.
The new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
- Score: 54.10697546734503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research shows that deep learning based object detection is
vulnerable to adversarial examples. Generally, adversarial attacks on object
detection fall into targeted and untargeted attacks. According to our detailed
investigation, research on the former is considerably scarcer than on the
latter, and all existing methods for the targeted attack follow the same mode,
i.e., the object-mislabeling mode, which misleads detectors into mislabeling a
detected object with a specific wrong label. However, this mode yields limited
attack success rates and limited universal and generalization performance. In
this paper, we propose a new object-fabrication targeted attack mode that can
mislead detectors to 'fabricate' extra false objects with specific target
labels. Furthermore, we design a dual attention based targeted feature space
attack method to implement the proposed attack mode. The attack performance of
the proposed mode and method is evaluated on the MS COCO and BDD100K datasets
using Faster R-CNN and YOLOv5. Evaluation results demonstrate that the proposed
object-fabrication targeted attack mode and the corresponding targeted feature
space attack method show significant improvements in image-specific attack,
universal performance, and generalization capability, compared with previous
targeted attacks for object detection. Code will be made available.
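
Since the authors' code is not yet released, the following is a minimal, hedged sketch of what a targeted feature-space attack of this kind could look like. It is not the paper's method: `backbone`, `guide_feat`, and `attn_mask` are illustrative assumptions, and the single spatial mask stands in for the paper's dual attention design, whose details are not given in the abstract. The idea is to perturb the image so that backbone features at attended locations move toward a target-class guide feature, encouraging the detection head to fabricate objects of that class.

```python
# Hedged sketch, not the authors' implementation: a PGD-style feature-space
# attack that pushes backbone features toward a target-class guide feature
# at attended locations, so the detection head fabricates extra objects.
import torch
import torch.nn.functional as F

def fabrication_attack(backbone, image, guide_feat, attn_mask,
                       eps=8 / 255, alpha=2 / 255, steps=20):
    """image: (1,3,H,W) in [0,1]; guide_feat: (C,) feature of the target class
    (assumed precomputed, e.g. a class-mean feature); attn_mask: (h,w) weights
    marking where fabricated objects should appear (a stand-in for the
    paper's dual attention)."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        feat = backbone(image + delta)  # (1, C, h, w) feature map
        # cosine similarity between every spatial feature and the guide feature
        sim = F.cosine_similarity(feat, guide_feat.view(1, -1, 1, 1), dim=1)
        loss = -(attn_mask * sim).mean()  # raise similarity where attended
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # descend on the loss
            delta.clamp_(-eps, eps)                     # L_inf budget
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep pixels valid
        delta.grad.zero_()
    return (image + delta).detach()
```

A universal variant would optimize one shared perturbation over a batch of images; the image-specific version above is the simplest case.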
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks (a simplified sketch of the underlying query-similarity check appears after this list).
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection [15.491101949025651]
This paper proposes a general multi-label adversarial attack framework, namely the muLti-labEl adverSarial falSe data injectiON attack (LESSON).
Four typical LESSON attacks based on the proposed framework and two dimensions of attack objectives are examined.
arXiv Detail & Related papers (2024-01-29T09:44:59Z)
- Ensemble-based Blackbox Attacks on Dense Prediction [16.267479602370543]
We show that a carefully designed ensemble can create effective attacks for a number of victim models.
In particular, we show that normalization of the weights for individual models plays a critical role in the success of the attacks.
Our proposed method can also generate a single perturbation that can fool multiple blackbox detection and segmentation models simultaneously.
arXiv Detail & Related papers (2023-03-25T00:08:03Z)
- GLOW: Global Layout Aware Attacks for Object Detection [27.46902978168904]
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS validation set.
arXiv Detail & Related papers (2023-02-27T22:01:34Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImageNet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we define the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples to attack anchor-free object models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)
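
The AdvQDet entry above reports detection of query-based attacks within 5 shots; the underlying intuition, that consecutive queries issued by such an attack embed almost identically, is easy to sketch. The snippet below is an illustrative simplification rather than the ACPT implementation: `encoder` stands in for the contrastively tuned CLIP image encoder, and the threshold and window values are assumptions, not the paper's settings.

```python
# Hedged sketch, not the ACPT implementation: flag a client whose recent
# queries embed too close together, the signature of a query-based attack.
import torch

class QuerySimilarityDetector:
    def __init__(self, encoder, threshold=0.95, window=5):
        self.encoder = encoder      # maps a (1,3,H,W) image to a (D,) embedding
        self.threshold = threshold  # cosine-similarity alarm level (assumed)
        self.window = window        # how many recent queries to compare against
        self.buffer = []            # embeddings of the most recent queries

    @torch.no_grad()
    def check(self, image):
        emb = self.encoder(image)
        emb = emb / emb.norm()      # unit-normalize so dot product = cosine
        # alarm if the new query is near-identical to any recent one
        is_attack = any(float(emb @ past) > self.threshold
                        for past in self.buffer)
        self.buffer.append(emb)
        self.buffer = self.buffer[-self.window:]
        return is_attack
```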