Defending Against Person Hiding Adversarial Patch Attack with a
Universal White Frame
- URL: http://arxiv.org/abs/2204.13004v1
- Date: Wed, 27 Apr 2022 15:18:08 GMT
- Title: Defending Against Person Hiding Adversarial Patch Attack with a
Universal White Frame
- Authors: Youngjoon Yu, Hong Joo Lee, Hakmin Lee, and Yong Man Ro
- Abstract summary: High-performance object detection networks are vulnerable to adversarial patch attacks.
Person-hiding attacks are emerging as a serious problem in many safety-critical applications.
We propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns.
- Score: 28.128458352103543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection has attracted great attention in the computer vision area
and has emerged as an indispensable component in many vision systems. In the
era of deep learning, many high-performance object detection networks have been
proposed. Although these detection networks show high performance, they are
vulnerable to adversarial patch attacks. Changing the pixels in a restricted
region can easily fool the detection network in the physical world. In
particular, person-hiding attacks are emerging as a serious problem in many
safety-critical applications such as autonomous driving and surveillance
systems. Although it is necessary to defend against an adversarial patch
attack, very few efforts have been dedicated to defending against person-hiding
attacks. To tackle the problem, in this paper, we propose a novel defense
strategy that mitigates a person-hiding attack by optimizing defense patterns,
whereas previous methods optimize the model itself. In the proposed method, a
frame-shaped pattern called a 'universal white frame' (UWF) is optimized and
placed along the outer border of the image. To defend against adversarial patch
attacks, the UWF should have three properties: (i) it suppresses the effect of
the adversarial patch, (ii) it maintains the original prediction, and (iii) it
is applicable regardless of the image. To satisfy these properties, we
propose a novel pattern optimization algorithm that can defend against the
adversarial patch. Through comprehensive experiments, we demonstrate that the
proposed method effectively defends against the adversarial patch attack.
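The frame optimization described above can be pictured with a short sketch. The following is a minimal PyTorch illustration, not the authors' released code: `detector.detection_loss`, the 224x224 input size, the frame width, and the batch iterators are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F

FRAME_W = 16  # hypothetical frame width in pixels

def apply_frame(images, frame):
    """Pad a batch and overwrite the border region with the frame pattern."""
    _, _, h, w = images.shape
    padded = F.pad(images, (FRAME_W,) * 4)  # reserve a border around the image
    mask = torch.ones_like(padded)
    mask[:, :, FRAME_W:FRAME_W + h, FRAME_W:FRAME_W + w] = 0  # 1 only on the frame
    return padded * (1 - mask) + frame.clamp(0, 1) * mask

def optimize_frame(detector, patched_batches, clean_batches, steps=1000, lr=0.01):
    # Start from an all-white frame, as the name suggests, and refine it.
    side = 224 + 2 * FRAME_W
    frame = torch.ones(1, 3, side, side, requires_grad=True)
    opt = torch.optim.Adam([frame], lr=lr)
    for _ in range(steps):
        x_adv = apply_frame(next(patched_batches), frame)    # (i) suppress the patch
        x_clean = apply_frame(next(clean_batches), frame)    # (ii) keep clean predictions
        # (iii) universality: one shared frame is trained across all images.
        # detection_loss is a hypothetical API: low when persons are still detected.
        loss = detector.detection_loss(x_adv) + detector.detection_loss(x_clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return frame.detach()
```

One shared frame trained over many images is what would make the pattern "universal": at test time it is pasted around any input without per-image optimization.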
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Fight Fire with Fire: Combating Adversarial Patch Attacks using Pattern-randomized Defensive Patches [12.947503245230866]
We propose a novel and general methodology for defending against adversarial attacks.
We inject two types of defensive patches, canary and woodpecker, into the input to proactively probe or weaken potential adversarial patches (a sketch of the probing idea follows below).
The effectiveness and practicality of the proposed method are demonstrated through comprehensive experiments.
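As an illustration of the canary idea (a hedged reading, not the paper's code): paste a known benign pattern into the input and check whether the model's response stays close to its expected response; a large deviation suggests interference from an adversarial patch. `expected_logits`, the placement, and the tolerance below are illustrative assumptions.

```python
import torch

def paste(img, patch, y, x):
    """Return a copy of img with patch pasted at (y, x)."""
    out = img.clone()
    out[:, y:y + patch.shape[1], x:x + patch.shape[2]] = patch
    return out

def canary_probe(model, img, canary, expected_logits, y=0, x=0, tol=0.5):
    """Flag a possible patch attack if the canary's response deviates."""
    probed = paste(img, canary, y, x)
    with torch.no_grad():
        logits = model(probed.unsqueeze(0)).squeeze(0)
    return (logits - expected_logits).abs().max().item() > tol  # True = suspicious
```

Per the title, the paper additionally randomizes the defensive patterns, which the fixed canary above does not capture.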
arXiv Detail & Related papers (2023-11-10T15:36:57Z)
- A Random-patch based Defense Strategy Against Physical Attacks for Face Recognition Systems [3.6202815454709536]
We propose a random-patch based defense strategy to robustly detect physical attacks on face recognition systems (FRS).
Our method can be easily applied to real-world face recognition systems and extended to other defense methods to boost detection performance.
arXiv Detail & Related papers (2023-04-16T16:11:56Z)
- CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World [8.826711009649133]
Patch-based physical attacks have raised increasing concern.
Most existing methods focus on obscuring targets captured on the ground, and some of these methods are simply extended to deceive aerial detectors.
We propose Contextual Background Attack (CBA), a novel physical attack framework against aerial detection that achieves strong attack efficacy and transferability in the physical world without smudging the objects of interest at all.
arXiv Detail & Related papers (2023-02-27T05:10:27Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
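A hedged sketch of the segment-then-mask idea (illustrative, not the authors' implementation; `patch_segmenter` is an assumed model that predicts a per-pixel patch mask):

```python
import torch

def defend(detector, patch_segmenter, img, thresh=0.5):
    """Blank out pixels a segmenter flags as adversarial, then run detection."""
    with torch.no_grad():
        mask = (patch_segmenter(img.unsqueeze(0)).sigmoid() > thresh).float()
    cleaned = img.unsqueeze(0) * (1 - mask)  # remove suspected patch pixels
    return detector(cleaned)
```

The "complete" step in SAC refines coarse segmenter outputs into full patch masks; the sketch above shows only the masking stage.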
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack-agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking [46.03749650789915]
Localized adversarial patches aim to induce misclassification in machine learning models by arbitrarily modifying pixels within a restricted region of an image.
We propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.
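A simplified sketch of masking over local predictions, inspired by the robust-masking idea; the shapes and the windowing heuristic here are illustrative assumptions, not PatchGuard's exact algorithm:

```python
import torch
import torch.nn.functional as F

def robust_masked_score(local_logits, window=2):
    """Aggregate local class evidence after masking the most suspicious window.

    local_logits: (H, W, C) class evidence from a network with small receptive
    fields, so a localized patch can only corrupt a small spatial window.
    """
    top = local_logits.sum(dim=(0, 1)).argmax().item()   # tentative top class
    ev = local_logits[:, :, top]
    # Find the window contributing the most evidence to the top class; a patch
    # must concentrate its influence in such a window.
    pooled = F.avg_pool2d(ev[None, None], window, stride=1).squeeze(0).squeeze(0)
    idx = pooled.argmax().item()
    y, x = divmod(idx, pooled.shape[-1])
    masked = local_logits.clone()
    masked[y:y + window, x:x + window, :] = 0            # zero the suspicious window
    return masked.sum(dim=(0, 1))                        # class scores after masking
```

PatchGuard's actual aggregation comes with provable robustness guarantees; the sketch only conveys the mask-then-aggregate shape of the defense.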
arXiv Detail & Related papers (2020-05-17T03:38:34Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)