Bias-based Universal Adversarial Patch Attack for Automatic Check-out
- URL: http://arxiv.org/abs/2005.09257v3
- Date: Mon, 3 Aug 2020 13:06:03 GMT
- Title: Bias-based Universal Adversarial Patch Attack for Automatic Check-out
- Authors: Aishan Liu, Jiakai Wang, Xianglong Liu, Bowen Cao, Chongzhi Zhang,
Hang Yu
- Abstract summary: Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
- Score: 59.355948824578434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are inputs with imperceptible perturbations
that easily mislead deep neural networks (DNNs). Recently, the adversarial
patch, with noise confined to a small and localized region, has emerged for
its easy feasibility in real-world scenarios. However, existing strategies
fail to generate adversarial patches with strong generalization ability. In
other words, the adversarial patches are input-specific and fail to attack
images from all classes, especially those unseen during training. To address
this problem, this paper proposes a bias-based framework that generates
class-agnostic universal adversarial patches with strong generalization
ability by exploiting both the perceptual and semantic biases of models.
Regarding the perceptual bias, since DNNs are strongly biased towards
textures, we exploit hard examples, which convey strong model uncertainty, and
extract a textural patch prior from them using style similarities. This patch
prior lies closer to decision boundaries and thus promotes attacks. To
alleviate the heavy dependency on large amounts of data when training
universal attacks, we also exploit the semantic bias: prototypes, which
capture class-wise preferences, are introduced and pursued by maximizing the
multi-class margin to aid universal training. Taking Automatic Check-out
(ACO) as the typical scenario, extensive experiments are conducted in both
white-box and black-box settings, in the digital world (RPC, the largest
ACO-related dataset) and the physical world (Taobao and JD, the world's
largest online shopping platforms). Experimental results demonstrate that our
proposed framework outperforms state-of-the-art adversarial patch attack
methods.
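The core idea of a universal, class-agnostic patch — one fixed perturbation, applied at the same location of every input, optimized so the model misclassifies images from many classes at once — can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's method: it stands a linear classifier in for a DNN, uses plain gradient ascent on the cross-entropy of the model's own predictions, and omits the textural prior and prototype components; all names, shapes, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a DNN (white-box assumption:
# the attacker knows the weights). Shapes are illustrative.
n_classes, dim = 4, 64
W = rng.normal(size=(n_classes, dim))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A "universal" patch overwrites the same fixed coordinates of every input.
patch_idx = np.arange(8)           # patch location (fixed across inputs)
patch = np.zeros(len(patch_idx))   # patch values, to be optimized

def apply_patch(x, p):
    x = x.copy()
    x[:, patch_idx] = p            # same patch pasted onto every image
    return x

# "Training images" drawn from several classes; labels are the model's own
# clean predictions, so clean accuracy is 1.0 and flips count as successes.
X = rng.normal(size=(32, dim))
y = np.argmax(X @ W.T, axis=1)

# Untargeted universal objective: gradient-ascend the cross-entropy of the
# true labels over the whole batch, so one patch degrades all classes.
for _ in range(200):
    probs = softmax(apply_patch(X, patch) @ W.T)
    g_logits = probs.copy()
    g_logits[np.arange(len(y)), y] -= 1.0      # d(CE loss)/d(logits)
    g_x = g_logits @ W                         # back through the linear layer
    patch += 0.05 * g_x[:, patch_idx].sum(0)   # ascend: increase the loss
    patch = np.clip(patch, -3.0, 3.0)          # keep the patch bounded

clean_acc = (np.argmax(X @ W.T, 1) == y).mean()
adv_acc = (np.argmax(apply_patch(X, patch) @ W.T, 1) == y).mean()
print(f"accuracy clean={clean_acc:.2f} patched={adv_acc:.2f}")
```

Because the single patch is shared across the batch, lowering `adv_acc` requires a direction that hurts many inputs simultaneously — the same generalization pressure, in miniature, that the paper's universal training addresses with its textural prior and class prototypes.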
Related papers
- BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order Optimization [10.769992215544358]
Adversarial attack strategies assume that the adversary has access to the training data, the model parameters, and the input during deployment.
We propose a black-box adversarial attack strategy that produces adversarial patches which can be applied anywhere in the input image to perform an adversarial attack.
arXiv Detail & Related papers (2024-05-09T18:42:26Z) - Distributed Adversarial Training to Robustify Deep Neural Networks at
Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z) - Defensive Patches for Robust Recognition in the Physical World [111.46724655123813]
Data-end defense improves robustness by operations on input data instead of modifying models.
Previous data-end defenses show low generalization against diverse noises and weak transferability across multiple models.
We propose a defensive patch generation framework that addresses these problems by helping models better exploit robust features.
arXiv Detail & Related papers (2022-04-13T07:34:51Z) - Generative Dynamic Patch Attack [6.1863763890100065]
We propose an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA)
GDPA generates both patch pattern and patch location adversarially for each input image.
Experiments on VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack success rates than state-of-the-art patch attacks.
arXiv Detail & Related papers (2021-11-08T04:15:34Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called adversarial patch, draws researchers' attention due to its strong attack abilities.
We propose an approach to generate adversarial patches with one single image.
Our approach shows the strong attack abilities in white-box settings and the excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a universal adversarial perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by proposing to utilize class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Decision-based Universal Adversarial Attack [55.76371274622313]
In the black-box setting, current universal adversarial attack methods utilize substitute models to generate the perturbation.
We propose an efficient Decision-based Universal Attack (DUAttack)
The effectiveness of DUAttack is validated through comparisons with other state-of-the-art attacks.
arXiv Detail & Related papers (2020-09-15T12:49:03Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.