Using Frequency Attention to Make Adversarial Patch Powerful Against
Person Detector
- URL: http://arxiv.org/abs/2205.04638v2
- Date: Wed, 11 May 2022 13:41:29 GMT
- Title: Using Frequency Attention to Make Adversarial Patch Powerful Against
Person Detector
- Authors: Xiaochun Lei, Chang Lu, Zetao Jiang, Zhaoting Gong, Xiang Cai, Linjun
Lu
- Abstract summary: This paper proposes the Frequency Module (FRAN), a frequency-domain attention module for guiding patch generation.
Our method increases the attack success rates on small and medium targets by 4.18% and 3.89%, respectively, over the state-of-the-art attack method.
- Score: 2.5766957676786006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to adversarial attacks. In
particular, object detectors can be attacked by applying an adversarial patch
to the image. However, because the patch shrinks during preprocessing, most
existing approaches that use adversarial patches against object detectors
suffer a reduced attack success rate on small and medium targets. This paper
proposes the Frequency Module (FRAN), a frequency-domain attention module for
guiding patch generation. This is the first study to introduce frequency-domain
attention to improve the attack capability of adversarial patches. When
attacking the YOLOv3 person detector, our method increases the attack success
rates on small and medium targets by 4.18% and 3.89%, respectively, over the
state-of-the-art attack method, without reducing the success rate on large
targets.
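The FRAN module itself is not reproduced here; as a hedged illustration, the following minimal PyTorch sketch shows one way frequency-domain attention could guide patch updates. The `frequency_attention` weighting and the `detector_loss` placeholder are assumptions, the latter standing in for the YOLOv3 person-detection objective used in the paper.

```python
import torch

def frequency_attention(patch: torch.Tensor) -> torch.Tensor:
    """Soft attention over the patch's frequency magnitudes (an assumption;
    the paper's FRAN is a learned module)."""
    mag = torch.fft.fft2(patch, norm="ortho").abs()   # 2-D FFT per channel
    return torch.softmax(mag.flatten(), dim=0).reshape_as(mag)

def detector_loss(patch: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder; the real attack scores patched images with
    a person detector such as YOLOv3 and minimizes its confidence."""
    return patch.mean()

patch = torch.rand(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    detector_loss(patch).backward()
    with torch.no_grad():
        # Re-weight the gradient spectrum by the attention map, steering
        # updates toward frequencies assumed to survive image resizing.
        attn = frequency_attention(patch)
        spec = torch.fft.fft2(patch.grad, norm="ortho") * attn
        patch.grad.copy_(torch.fft.ifft2(spec, norm="ortho").real)
    opt.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)
```

Weighting gradients in the frequency domain is meant to concentrate perturbation energy in components that survive the resizing step blamed for the drop on small and medium targets.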
Related papers
- I Don't Know You, But I Can Catch You: Real-Time Defense against Diverse Adversarial Patches for Object Detectors [12.790316371521477]
We propose NutNet, an innovative model for detecting adversarial patches with high generalization, robustness, and efficiency.
Our method achieves an average defense performance over 2.4 times and 4.7 times higher than existing approaches for hiding attacks (HA) and appearing attacks (AA), respectively.
arXiv Detail & Related papers (2024-06-12T09:16:19Z)
- An Invisible Backdoor Attack Based On Semantic Feature [0.0]
Backdoor attacks have severely threatened deep neural network (DNN) models in the past several years.
We propose a novel backdoor attack that makes imperceptible changes.
We evaluate our attack on three prominent image classification datasets.
arXiv Detail & Related papers (2024-05-19T13:50:40Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
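As a hedged illustration of the segment-then-remove idea, a minimal sketch follows; SAC's patch segmenter and shape-completion stage are learned models, so here the predicted `patch_mask` is assumed given and a simple dilation stands in for shape completion.

```python
import torch
import torch.nn.functional as F

def remove_patch(image: torch.Tensor, patch_mask: torch.Tensor,
                 dilate: int = 7) -> torch.Tensor:
    """Dilate the predicted patch mask (a crude stand-in for the paper's
    learned shape completion) and blank the region before detection."""
    m = patch_mask.float().unsqueeze(0).unsqueeze(0)           # 1 x 1 x H x W
    m = F.max_pool2d(m, kernel_size=dilate, stride=1, padding=dilate // 2)
    return image * (1.0 - m.squeeze(0))                        # zero patch pixels

image = torch.rand(3, 128, 128)
patch_mask = torch.zeros(128, 128)
patch_mask[40:80, 40:80] = 1.0        # assumed output of a patch segmenter
clean_view = remove_patch(image, patch_mask)
```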
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
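A minimal sketch of a momentum-accumulated patch update in the spirit of the attack above, assuming a hypothetical `counting_loss` in place of a real crowd-counting model's objective:

```python
import torch

def counting_loss(patch: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder; the real attack maximizes the counting
    error of a crowd-counting model on patched scenes."""
    return -patch.pow(2).mean()

patch = torch.rand(3, 32, 32, requires_grad=True)
velocity = torch.zeros_like(patch)
mu, step = 0.9, 1.0 / 255                 # momentum factor and step size
for _ in range(50):
    grad, = torch.autograd.grad(counting_loss(patch), patch)
    # Accumulate an L1-normalized gradient, as in momentum iterative attacks.
    velocity = mu * velocity + grad / grad.abs().mean().clamp_min(1e-12)
    with torch.no_grad():
        patch += step * velocity.sign()
        patch.clamp_(0.0, 1.0)
```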
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- RPATTACK: Refined Patch Attack on General Object Detectors [31.28929190510979]
We propose a novel patch-based method for attacking general object detectors.
Our RPAttack achieves a missed detection rate of 100% for both YOLOv4 and Faster R-CNN.
arXiv Detail & Related papers (2021-03-23T11:45:41Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack embeds hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, which treats the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions.
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
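A minimal sketch of the update described above, assuming a uniform averaging kernel for the overflow reassignment (the paper's project kernel may differ):

```python
import torch
import torch.nn.functional as F

def pim_step(x_adv, x, grad, eps=16 / 255, beta=10.0, T=10, k=3):
    """One patch-wise iteration: take an amplified step, then redistribute
    the gradient overflow beyond the eps-ball to neighbouring pixels."""
    step = beta * eps / T                                 # amplified step size
    x_new = x_adv + step * grad.sign()
    clipped = torch.clamp(x_new, x - eps, x + eps)
    overflow = x_new - clipped                            # part exceeding eps
    kernel = torch.ones(3, 1, k, k) / (k * k)             # uniform project kernel
    spread = F.conv2d(overflow.unsqueeze(0), kernel,
                      padding=k // 2, groups=3).squeeze(0)
    return torch.clamp(clipped + spread, x - eps, x + eps)

x = torch.rand(3, 32, 32)
x_adv = pim_step(x.clone(), x, torch.randn(3, 32, 32))
```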
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
- Detection of Iterative Adversarial Attacks via Counter Attack [4.549831511476249]
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data.
For high-dimensional data, like images, they are inherently vulnerable to adversarial attacks.
In this work we outline a mathematical proof that the CW attack can be used as a detector itself.
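A hedged sketch of the detection idea: adversarial inputs sit close to the decision boundary, so the perturbation a counter attack needs to flip the prediction is abnormally small and can be thresholded. Plain signed-gradient steps replace the CW attack, and the toy model and threshold are assumptions made only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # toy classifier

def counter_attack_norm(x: torch.Tensor, steps: int = 20,
                        lr: float = 0.05) -> torch.Tensor:
    """Smallest perturbation norm found that flips the current prediction."""
    y = model(x).argmax(dim=1)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * g.sign()                        # push away from y
            if model(x + delta).argmax(dim=1).ne(y).all():
                break
    return delta.detach().flatten(1).norm(dim=1)

x = torch.rand(1, 1, 28, 28)
is_adversarial = counter_attack_norm(x) < 0.5             # threshold is assumed
```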
arXiv Detail & Related papers (2020-09-23T21:54:36Z)