DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation
- URL: http://arxiv.org/abs/2312.16907v1
- Date: Thu, 28 Dec 2023 08:58:13 GMT
- Title: DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation
- Authors: Wenyi Tan, Yang Li, Chenxing Zhao, Zhunga Liu, and Quan Pan
- Abstract summary: We introduce the concept of energy and treat adversarial patch generation as an optimization of the patch to minimize the total energy of the ``person'' category.
By adopting adversarial training, we construct a dynamically optimized ensemble model.
We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models.
- Score: 12.995762461474856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection is a fundamental task in various applications ranging from
autonomous driving to intelligent security systems. However, recognition of a
person can be hindered when their clothing is decorated with carefully designed
graffiti patterns, leading to the failure of object detection. To achieve
greater attack potential against unknown black-box models, adversarial patches
capable of affecting the outputs of multiple-object detection models are
required. While ensemble models have proven effective, current research in the
field of object detection typically focuses on the simple fusion of the outputs
of all models, with limited attention being given to developing general
adversarial patches that can function effectively in the physical world. In
this paper, we introduce the concept of energy and formulate adversarial
patch generation as an optimization problem: the patch is optimized to
minimize the total energy of the ``person'' category. Additionally, by adopting
adversarial training, we construct a dynamically optimized ensemble model.
During training, the weight parameters of the attacked target models are
adjusted to find the balance point at which the generated adversarial patches
can effectively attack all target models. We carried out six sets of
comparative experiments and tested our algorithm on five mainstream object
detection models. The adversarial patches generated by our algorithm can reduce
the recognition accuracy of YOLOv2 and YOLOv3 to 13.19% and 29.20%,
respectively. In addition, we conducted physical-world experiments with
T-shirts covered by our adversarial patches and showed that wearers could
evade recognition by the object detection model.
Finally, leveraging the Grad-CAM tool, we explored the attack mechanism of
adversarial patches from an energetic perspective.
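The abstract's core idea, minimizing the weighted "person" energy across several detectors while adversarially rebalancing the ensemble weights toward the hardest-to-attack model, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `apply_patch`, `person_energy`, the update rule, and all hyperparameters are assumptions for illustration.

```python
import torch

def apply_patch(image, patch):
    # Paste the patch onto the top-left corner of the image (illustrative placement).
    out = image.clone()
    h, w = patch.shape[-2:]
    out[..., :h, :w] = patch
    return out

def person_energy(scores):
    # Treat a detector's total "person" confidence as an energy term.
    return scores.sum()

def ensemble_step(patch, weights, models, image, lr_patch=0.01, lr_w=0.1):
    # One min-max step: descend on the patch to lower the weighted total
    # energy, then shift ensemble weight toward models whose energy stays
    # high (i.e., models the current patch attacks least effectively).
    patch = patch.detach().requires_grad_(True)
    energies = torch.stack([person_energy(m(apply_patch(image, patch)))
                            for m in models])
    total = (weights * energies).sum()
    grad, = torch.autograd.grad(total, patch)
    new_patch = (patch - lr_patch * grad.sign()).clamp(0.0, 1.0)
    new_weights = torch.softmax(torch.log(weights) + lr_w * energies.detach(), dim=0)
    return new_patch.detach(), new_weights
```

Iterating this step searches for the balance point described above, where no single target model dominates the objective.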
Related papers
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
- Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World [11.24237636482709]
We design a unified adversarial patch that can perform cross-modal physical attacks, achieving evasion in both modalities simultaneously with a single patch.
We propose a novel boundary-limited shape optimization approach that aims to achieve compact and smooth shapes for the adversarial patch.
Our method is evaluated against several state-of-the-art object detectors, achieving an Attack Success Rate (ASR) of over 80%.
arXiv Detail & Related papers (2023-07-27T08:14:22Z)
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17% against visible and infrared object detectors, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection comprise targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z)
- Defensive Patches for Robust Recognition in the Physical World [111.46724655123813]
Data-end defense improves robustness by operations on input data instead of modifying models.
Previous data-end defenses show low generalization against diverse noises and weak transferability across multiple models.
We propose a defensive patch generation framework to address these problems by helping models better exploit these features.
arXiv Detail & Related papers (2022-04-13T07:34:51Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Class-Aware Robust Adversarial Training for Object Detection [12.600009462416663]
We present a novel class-aware robust adversarial training paradigm for the object detection task.
For a given image, the proposed approach generates a universal adversarial perturbation to simultaneously attack all objects present in the image.
The proposed approach decomposes the total loss into class-wise losses and normalizes each class loss using the number of objects for the class.
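The class-wise decomposition and normalization described in that summary can be sketched in a few lines; this is an illustrative reading of the idea, not that paper's code, and the function name and inputs are assumptions.

```python
from collections import Counter

def class_aware_loss(per_object_losses, labels):
    # Decompose the total loss into class-wise contributions and
    # normalize each class's loss by its object count, so classes
    # with many instances do not dominate the objective.
    counts = Counter(labels)
    return sum(loss / counts[lab]
               for loss, lab in zip(per_object_losses, labels))
```

With three objects of class 0 (loss 1.0 each) and one of class 1 (loss 4.0), the normalized total is 1.0 + 4.0 = 5.0 rather than the raw sum 7.0, giving each class equal footing.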
arXiv Detail & Related papers (2021-03-30T08:02:28Z)
- Voting based ensemble improves robustness of defensive models [82.70303474487105]
We study whether it is possible to create an ensemble to further improve robustness.
By ensembling several state-of-the-art pre-trained defense models, our method can achieve a 59.8% robust accuracy.
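The voting mechanism behind such an ensemble is simple to sketch; this is a generic majority-vote illustration under the assumption that each defense model emits a class label, not the cited paper's implementation.

```python
from collections import Counter

def majority_vote(predictions):
    # Each defense model casts its predicted label as a vote; the
    # ensemble returns the most common one. The ensemble prediction
    # survives any adversarial example that fools fewer than half
    # of the member models.
    return Counter(predictions).most_common(1)[0][0]
```

For example, if two of three defense models still predict the correct class under attack, the voted output remains correct.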
arXiv Detail & Related papers (2020-11-28T00:08:45Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)