Natural Adversarial Patch Generation Method Based on Latent Diffusion Model
- URL: http://arxiv.org/abs/2312.16401v1
- Date: Wed, 27 Dec 2023 04:09:44 GMT
- Title: Natural Adversarial Patch Generation Method Based on Latent Diffusion Model
- Authors: Xianyi Chen and Fazhan Liu and Dong Jiang and Kai Yan
- Abstract summary: This paper proposes a novel adversarial patch method called the Latent Diffusion Patch (LDP).
It polishes the patches and images through the powerful natural generation abilities of diffusion models, making them more acceptable to the human visual system.
Experimental results, in both the digital and physical worlds, show that LDPs achieve a visual subjectivity score of 87.3%, while still maintaining effective attack capabilities.
- Score: 2.5879126618627204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, some research has shown that deep neural networks are
vulnerable to adversarial attacks: well-trained samples or patches can be used
to trick a neural network detector or human visual perception. However, these
adversarial patches, with their conspicuous and unusual patterns, lack
camouflage and can easily raise suspicion in the real world. To solve this
problem, this paper proposes a novel adversarial patch method called the Latent
Diffusion Patch (LDP). A pretrained encoder is first designed to compress
natural images into a feature space that retains their key characteristics; a
diffusion model is then trained on this feature space; finally, the latent
space of the pretrained diffusion model is explored using image denoising
techniques. LDP polishes the patches and images through the powerful natural
generation abilities of diffusion models, making them more acceptable to the
human visual system. Experimental results, in both the digital and physical
worlds, show that LDPs achieve a visual subjectivity score of 87.3%, while
still maintaining effective attack capabilities.
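To make the three-stage pipeline above more concrete (encode natural images into a latent space, train a diffusion model on that space, then search the latent space for an adversarial patch), the following Python sketch shows only the core idea of optimizing a patch in a learned latent space so that the decoded patch stays close to a natural seed image while lowering a detector's confidence. It is a minimal toy, not the paper's implementation: TinyEncoder, TinyDecoder, TinyDetector, the losses, and all hyperparameters are illustrative stand-ins for the pretrained latent-diffusion encoder/decoder and the attacked object detector, and the diffusion-based denoising "polish" step described in the abstract is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for pretrained components; in practice these would be the pretrained
# latent-diffusion encoder/decoder and the attacked object detector.
class TinyEncoder(nn.Module):                      # 3x64x64 patch -> 4x8x8 latent
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyDecoder(nn.Module):                      # 4x8x8 latent -> 3x64x64 patch
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class TinyDetector(nn.Module):                     # stand-in for the target detector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, img):
        return torch.sigmoid(self.net(img))        # detection confidence in [0, 1]

def paste_patch(scene, patch, top=16, left=16):
    # Overlay the patch onto the scene at a fixed position (no EOT transforms here).
    out = scene.clone()
    out[:, :, top:top + patch.shape[2], left:left + patch.shape[3]] = patch
    return out

encoder, decoder, detector = TinyEncoder(), TinyDecoder(), TinyDetector()
for m in (encoder, decoder, detector):
    m.requires_grad_(False)                        # only the latent code is optimized

natural_patch = torch.rand(1, 3, 64, 64)           # natural image used as the seed
scene = torch.rand(1, 3, 128, 128)                 # background scene with the target
z = encoder(natural_patch).detach().requires_grad_(True)
opt = torch.optim.Adam([z], lr=5e-2)

for step in range(200):
    patch = decoder(z)                             # decoded patch from the latent code
    score = detector(paste_patch(scene, patch))    # detector confidence with patch applied
    attack_loss = score.mean()                     # push the detection score down
    natural_loss = F.mse_loss(patch, natural_patch)  # stay close to the natural seed
    loss = attack_loss + 0.1 * natural_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final detector score: {detector(paste_patch(scene, decoder(z))).item():.3f}")

In an LDP-style setup, the stand-in modules would be replaced by the pretrained latent-diffusion components, and the decoded patch would typically also pass through the diffusion denoising process (and, for physical-world attacks, random rendering transformations) before being scored by the detector.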
Related papers
- DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination [5.7254228484416325]
DiffPAD is a novel framework that harnesses the power of diffusion models for adversarial patch decontamination.
We show that DiffPAD achieves state-of-the-art adversarial robustness against patch attacks and excels in recovering naturalistic images without patch remnants.
arXiv Detail & Related papers (2024-10-31T15:09:36Z)
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- DPMesh: Exploiting Diffusion Prior for Occluded Human Mesh Recovery [71.6345505427213]
DPMesh is an innovative framework for occluded human mesh recovery.
It capitalizes on the profound diffusion prior about object structure and spatial relationships embedded in a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-04-01T18:59:13Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models [22.675911411028633]
We find that state-of-the-art deep neural network (DNN) models still retain their predictions even if we intentionally remove their robust features.
The resulting Natural Denoising Diffusion (NDD) attack shows a significantly high capability to generate low-cost, model-agnostic, and transferable adversarial attacks.
arXiv Detail & Related papers (2023-08-30T01:21:11Z)
- Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based on Diffusion Model for Object Detector [18.021582628066554]
We propose a novel naturalistic adversarial patch generation method based on diffusion models (DMs).
We are the first to propose DM-based naturalistic adversarial patch generation for object detectors.
arXiv Detail & Related papers (2023-07-16T15:22:30Z)
- Deceptive-NeRF/3DGS: Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction [60.52716381465063]
We introduce Deceptive-NeRF/3DGS to enhance sparse-view reconstruction with only a limited set of input images.
Specifically, we propose a deceptive diffusion model turning noisy images rendered from few-view reconstructions into high-quality pseudo-observations.
Our system progressively incorporates diffusion-generated pseudo-observations into the training image sets, ultimately densifying the sparse input observations by 5 to 10 times.
arXiv Detail & Related papers (2023-05-24T14:00:32Z)
- Training Diffusion Models with Reinforcement Learning [82.29328477109826]
Diffusion models are trained with an approximation to the log-likelihood objective.
In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for downstream objectives.
We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms; a toy sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-05-22T17:57:41Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
- Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z)
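Following up on the reinforcement-learning entry above, here is a minimal, purely illustrative REINFORCE sketch of treating denoising as a multi-step decision-making problem: a toy 1-D "denoiser" samples a short chain of Gaussian actions and is updated from a terminal reward. The policy network, reward, and all hyperparameters are hypothetical stand-ins rather than the paper's algorithm.

import torch
import torch.nn as nn
from torch.distributions import Normal

T = 10                                              # number of denoising steps
policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(x0):
    # Stand-in downstream objective: prefer final samples near +1.
    return -(x0 - 1.0).pow(2)

for it in range(500):
    x = torch.randn(64, 1)                          # batch of trajectories from pure noise
    log_probs = []
    for t in range(T):
        t_feat = torch.full_like(x, t / T)          # crude timestep embedding
        mean = policy(torch.cat([x, t_feat], dim=1))  # policy proposes the next state's mean
        dist = Normal(mean, 0.1)                    # fixed exploration noise
        x = dist.sample()                           # one denoising "action"
        log_probs.append(dist.log_prob(x).squeeze(1))
    R = reward(x.squeeze(1)).detach()               # terminal reward of the final sample
    adv = R - R.mean()                              # simple mean baseline
    loss = -(torch.stack(log_probs).sum(dim=0) * adv).mean()  # REINFORCE over all steps
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean final sample:", x.mean().item())

Because only a terminal reward is used and the transitions are sampled (not reparameterized), the gradient flows solely through the per-step log-probabilities, which is the policy-gradient view of the denoising chain that the entry above describes.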
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.