Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector
- URL: http://arxiv.org/abs/2307.08076v1
- Date: Sun, 16 Jul 2023 15:22:30 GMT
- Title: Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector
- Authors: Shuo-Yen Lin, Ernie Chu, Che-Hsien Lin, Jun-Cheng Chen, Jia-Ching Wang
- Abstract summary: We propose a novel naturalistic adversarial patch generation method based on diffusion models (DMs).
We are the first to propose DM-based naturalistic adversarial patch generation for object detectors.
- Score: 18.021582628066554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many physical adversarial patch generation methods have been
proposed to protect personal privacy from malicious monitoring by object
detectors. However, they usually fail to generate patch images that are
satisfactory in both stealthiness and attack performance without substantial
effort in careful hyperparameter tuning. To address this issue, we propose a
novel naturalistic adversarial patch generation method based on diffusion
models (DMs). By sampling an optimal image from a DM pretrained on natural
images, we can stably craft high-quality physical adversarial patches that
look natural to humans, without the severe mode-collapse problems that afflict
other deep generative models. To the best of our knowledge, we are the first
to propose DM-based naturalistic adversarial patch generation for object
detectors. Extensive quantitative, qualitative, and subjective experiments
demonstrate that the proposed approach generates better-quality, more
naturalistic adversarial patches than other state-of-the-art patch generation
methods while achieving acceptable attack performance. We also show various
generation trade-offs under different conditions.
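As a rough illustration of the optimization idea (not the authors' implementation), the sketch below tunes a diffusion-model latent so the decoded patch suppresses a detector's confidence. `diffusion_decode` and `detector_score` are toy stand-ins invented here; in the paper these would be a pretrained DM sampler and real object detectors, so only the loop structure reflects the approach.

```python
import torch

# Toy stand-ins, not the paper's models: in practice `diffusion_decode` would
# be a pretrained diffusion sampler and `detector_score` a real detector's
# "person" confidence.
diffusion_decode = torch.nn.Sequential(
    torch.nn.Linear(128, 3 * 64 * 64), torch.nn.Sigmoid())
detector_score = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1), torch.nn.Sigmoid())

z = torch.randn(1, 128, requires_grad=True)  # latent drawn from the DM prior
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    patch = diffusion_decode(z).view(1, 3, 64, 64)
    loss = detector_score(patch).mean()      # drive detection confidence down
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        z.clamp_(-3.0, 3.0)  # staying near the prior keeps the patch natural
```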
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, turning AI-generated images into adversarial forgeries that evade forensic detection.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) that generates adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
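To make the mask-constrained idea above concrete, here is a minimal sketch of one gradient-sign step confined to a binary semantic mask; `masked_fgsm` and its arguments are illustrative names, not the authors' API.

```python
import torch

def masked_fgsm(image, mask, loss_fn, epsilon=4 / 255):
    """One FGSM-style step confined to a semantic mask (ASMA-like idea).

    image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W) binary semantic region;
    loss_fn: maps an image to a scalar attack loss.
    """
    image = image.clone().requires_grad_(True)
    loss_fn(image).backward()
    # Perturb only inside the mask, so changes stay in local semantic regions.
    delta = epsilon * image.grad.sign() * mask
    return (image + delta).clamp(0, 1).detach()

img = torch.rand(1, 3, 32, 32)
m = torch.zeros(1, 1, 32, 32)
m[..., 8:24, 8:24] = 1.0                       # hypothetical semantic region
adv = masked_fgsm(img, m, lambda x: x.mean())  # dummy loss for illustration
```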
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the conditional diffusion model, yields a natural adversarial sample that the target model misclassifies.
Experiments show that the generated adversarial images are of high quality, raising concerns that harmful content can be generated while bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
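A minimal sketch of the seed-vector search described above, using the reference `cma` package; `generate` and `attack_loss` are toy stand-ins for the conditional diffusion model and the attacked classifier, invented only to make the loop runnable.

```python
import numpy as np
import cma  # reference CMA-ES implementation: pip install cma

D = 16  # toy seed dimension; EvoSeed searches the DM's initial noise vector
rng = np.random.default_rng(0)
W = rng.standard_normal((D, D))

def generate(seed_vec):
    # Stand-in for the conditional diffusion model (fixed random projection).
    return np.tanh(W @ seed_vec)

def attack_loss(seed_vec):
    # Lower is better: suppress the (stand-in) true-class score while keeping
    # the seed close to the prior so the generated sample stays natural.
    true_class_score = generate(seed_vec)[0]
    prior_penalty = 0.01 * float(np.sum(seed_vec ** 2))
    return true_class_score + prior_penalty

es = cma.CMAEvolutionStrategy(np.zeros(D), 0.5, {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()                      # sample candidate seed vectors
    es.tell(candidates, [attack_loss(c) for c in candidates])
best_seed = es.result.xbest                    # seed yielding the best attack
```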
- DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation [12.995762461474856]
We introduce the concept of energy and treat adversarial patch generation as an optimization that minimizes the total energy of the "person" category.
By adopting adversarial training, we construct a dynamically optimized ensemble model.
We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models.
arXiv Detail & Related papers (2023-12-28T08:58:13Z)
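Read as an energy minimization, the idea above can be sketched as follows; the three linear "detectors" are toy stand-ins for the real detector ensemble (DOEPatch uses mainstream object detectors), so only the energy-minimizing loop is illustrated.

```python
import torch

# Toy stand-ins for a detector ensemble; each maps an image to a "person"
# confidence score.
ensemble = [torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 32 * 32, 1),
                                torch.nn.Sigmoid()) for _ in range(3)]

patch = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)

for _ in range(100):
    # "Total energy" of the person category: summed confidence over the
    # ensemble; the patch is optimized to minimize it.
    energy = sum(det(patch).mean() for det in ensemble)
    opt.zero_grad()
    energy.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the patch a valid image
```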
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- LEAT: Towards Robust Deepfake Disruption in Real-World Scenarios via Latent Ensemble Attack [11.764601181046496]
Deepfakes, malicious visual content created by generative models, pose an increasingly harmful threat to society.
To proactively mitigate deepfake damages, recent studies have employed adversarial perturbation to disrupt deepfake model outputs.
We propose a simple yet effective disruption method called Latent Ensemble ATtack (LEAT), which attacks the independent latent encoding process.
arXiv Detail & Related papers (2023-07-04T07:00:37Z)
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability [62.105715985563656]
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
arXiv Detail & Related papers (2023-05-25T21:51:23Z)
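A minimal sketch of the Diff-PGD loop shape: a standard PGD step followed by a projection back toward natural images. The real projection runs a few reverse-diffusion (SDEdit-style) steps; the `purify` below substitutes a mild smoothing purely as a placeholder.

```python
import torch
import torch.nn.functional as F

def purify(x, strength=0.1):
    # Placeholder for the diffusion projection: Diff-PGD noises x and runs a
    # few reverse-diffusion steps to pull it back to the natural-image
    # manifold. Here: a mild depthwise box blur stands in.
    kernel = torch.ones(3, 1, 3, 3) / 9.0
    blurred = F.conv2d(x, kernel, padding=1, groups=3)
    return (1 - strength) * x + strength * blurred

def diff_pgd_step(x, x_orig, loss_fn, alpha=2 / 255, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss_fn(x).backward()
    with torch.no_grad():
        x = x + alpha * x.grad.sign()               # PGD ascent step
        x = x_orig + (x - x_orig).clamp(-eps, eps)  # L-infinity projection
        return purify(x).clamp(0, 1)                # project toward naturalness

x0 = torch.rand(1, 3, 32, 32)
adv = x0.clone()
for _ in range(10):
    adv = diff_pgd_step(adv, x0, lambda t: t.mean())  # dummy attack loss
```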
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method that uses masked images as counterfactual samples to improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
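A rough sketch of the masked-counterfactual construction above: the paper selects patches via class-activation maps and refills them with content from other images, whereas the version below picks patches at random and refills from a shuffled batch, purely to illustrate the data flow.

```python
import torch

def counterfactual_mask(images, mask_ratio=0.3, patch=8):
    # images: (B, C, H, W) with H and W divisible by `patch`.
    B, C, H, W = images.shape
    keep = (torch.rand(B, 1, H // patch, W // patch) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    donors = images[torch.randperm(B)]  # refill source: other batch images
    return images * mask + donors * (1 - mask)

batch = torch.rand(8, 3, 32, 32)
counterfactuals = counterfactual_mask(batch)  # extra samples for fine-tuning
```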
- Feasibility of Inconspicuous GAN-generated Adversarial Patches against Object Detection [3.395452700023097]
In this work, we evaluate existing approaches for generating inconspicuous patches.
We have evaluated two approaches to generate naturalistic patches: by incorporating patch generation into the GAN training process and by using the pretrained GAN.
Our experiments show that using a pretrained GAN yields realistic-looking patches while preserving attack performance similar to conventional adversarial patches.
arXiv Detail & Related papers (2022-07-15T08:48:40Z)
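For the pretrained-GAN route described above, the usual recipe is to freeze the generator and optimize only its latent, which is what keeps the patch on the realistic-image manifold; the tiny generator and score network below are invented stand-ins, not the paper's models.

```python
import torch

generator = torch.nn.Sequential(torch.nn.Linear(64, 3 * 32 * 32),
                                torch.nn.Tanh())
for p in generator.parameters():
    p.requires_grad_(False)                  # the "pretrained" GAN stays frozen
person_score = torch.nn.Sequential(torch.nn.Flatten(),
                                   torch.nn.Linear(3 * 32 * 32, 1),
                                   torch.nn.Sigmoid())

z = torch.randn(1, 64, requires_grad=True)   # only the latent is optimized
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    patch = generator(z).view(1, 3, 32, 32) * 0.5 + 0.5  # map to [0, 1]
    loss = person_score(patch).mean()        # suppress the detector's score
    opt.zero_grad()
    loss.backward()
    opt.step()
```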
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.