Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent
Diffusion Model
- URL: http://arxiv.org/abs/2312.11285v2
- Date: Thu, 28 Dec 2023 07:58:00 GMT
- Title: Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent
Diffusion Model
- Authors: Decheng Liu, Xijun Wang, Chunlei Peng, Nannan Wang, Ruimin Hu, Xinbo
Gao
- Abstract summary: We propose Adv-Diffusion, a unified framework that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose an identity-sensitive conditioned diffusion generative model that places semantic perturbations in the surroundings of the identity region.
The designed adaptive-strength adversarial perturbation algorithm ensures both attack transferability and stealthiness.
- Score: 61.53213964333474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks add perturbations to a source image to cause
misclassification by the target model, which demonstrates their potential for
attacking face recognition models. Existing adversarial face image generation
methods still cannot achieve satisfactory performance because of low
transferability and high detectability. In this paper, we propose
Adv-Diffusion, a unified framework that generates imperceptible adversarial
identity perturbations in the latent space rather than the raw pixel space,
exploiting the strong inpainting capability of the latent diffusion model to
produce realistic adversarial images. Specifically, we propose an
identity-sensitive conditioned diffusion generative model that places semantic
perturbations in the surroundings of the identity region. The designed
adaptive-strength adversarial perturbation algorithm ensures both attack
transferability and stealthiness. Extensive qualitative and quantitative
experiments on the public FFHQ and CelebA-HQ datasets show that the proposed
method outperforms state-of-the-art methods without requiring an extra
generative-model training process. The source code is available at
https://github.com/kopper-xdu/Adv-Diffusion.
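The latent-space attack idea can be sketched generically: take sign-gradient steps on a latent code so that a face embedding drifts away from the source identity, while an epsilon-ball bound keeps the change small. This is a minimal numpy sketch in which a linear map `W` stands in for the decoder-plus-face-recognition pipeline; all names, the fixed `eps` bound, and the closed-form gradient are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def latent_dodging_attack(z0, W, steps=100, alpha=0.01, eps=0.5):
    """Sign-gradient descent on a latent code z so that the toy face
    embedding W @ z drifts away from the source identity W @ z0, while
    ||z - z0||_inf <= eps keeps the perturbation small (imperceptibility)."""
    src = W @ z0
    z = z0.copy()
    for _ in range(steps):
        e = W @ z
        ne, ns = np.linalg.norm(e), np.linalg.norm(src)
        c = float(e @ src) / (ne * ns + 1e-12)
        # analytic gradient of cos(e, src) with respect to z, via e = W @ z
        grad_e = src / (ne * ns + 1e-12) - c * e / (ne * ne + 1e-12)
        grad_z = W.T @ grad_e
        z = z - alpha * np.sign(grad_z)      # step down the similarity
        z = np.clip(z, z0 - eps, z0 + eps)   # stay inside the latent eps-ball
    return z
```

In the paper's setting the perturbation strength is adapted rather than fixed, and the perturbed latent is rendered through the diffusion model's inpainting pass; here both are collapsed into the single `eps` bound for brevity.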
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Revealing Vulnerabilities in Stable Diffusion via Targeted Attacks [41.531913152661296]
We formulate the problem of targeted adversarial attack on Stable Diffusion and propose a framework to generate adversarial prompts.
Specifically, we design a gradient-based embedding optimization method to craft reliable adversarial prompts that guide stable diffusion to generate specific images.
After obtaining successful adversarial prompts, we reveal the mechanisms that cause the vulnerability of the model.
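The embedding-optimization step can be illustrated with a deliberately tiny surrogate: a linear map `A` stands in for Stable Diffusion's text-conditioned generator, and plain gradient descent pushes a prompt embedding `e` toward producing the attacker's target output `y`. The names, the linear surrogate, and the closed-form gradient are illustrative assumptions; the actual method backpropagates through the diffusion pipeline:

```python
import numpy as np

def optimize_prompt_embedding(A, y, steps=500, lr=0.02):
    """Gradient descent on a prompt embedding e so that a toy linear
    'generator' A @ e matches the target output y. The gradient of the
    squared error 0.5 * ||A e - y||^2 with respect to e is A.T @ (A e - y)."""
    e = np.zeros(A.shape[1])
    for _ in range(steps):
        residual = A @ e - y
        e -= lr * (A.T @ residual)   # steepest-descent update on the embedding
    return e
```

Since the toy generator cannot reproduce an arbitrary target exactly, the iterates converge to the least-squares embedding; the point of the sketch is only the shape of the loop, not the surrogate.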
arXiv Detail & Related papers (2024-01-16T12:15:39Z)
- Exposing the Fake: Effective Diffusion-Generated Images Detection [14.646957596560076]
This paper proposes a novel detection method called Stepwise Error for Diffusion-generated Image Detection (SeDID).
SeDID exploits the unique attributes of diffusion models, namely deterministic reverse and deterministic denoising errors.
Our work makes a pivotal contribution to distinguishing diffusion model-generated images, marking a significant step in the domain of artificial intelligence security.
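The intuition behind such reconstruction-based detectors can be shown with a toy: if "denoising" is a deterministic map onto the generator's manifold, samples the generator could have produced survive the round trip almost unchanged, while generic images do not. A minimal numpy sketch with a linear subspace standing in for the diffusion model's manifold (purely illustrative, not SeDID's actual computation):

```python
import numpy as np

def stepwise_error(x, basis):
    """Reconstruction error after a deterministic round trip: project x onto
    span(basis) (the toy 'generator manifold') and measure what is lost.
    basis must have orthonormal columns."""
    recon = basis @ (basis.T @ x)   # orthogonal projection onto the manifold
    return float(np.linalg.norm(x - recon))
```

Thresholding this error separates on-manifold ("generated") from off-manifold ("real") samples in the toy; SeDID instead compares deterministic reverse and denoising steps of the actual diffusion process at chosen timesteps.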
arXiv Detail & Related papers (2023-07-12T16:16:37Z)
- Diffusion Models for Imperceptible and Transferable Adversarial Attack [23.991194050494396]
We propose a novel imperceptible and transferable attack by leveraging both the generative and discriminative power of diffusion models.
Our proposed method, DiffAttack, is the first that introduces diffusion models into the adversarial attack field.
arXiv Detail & Related papers (2023-05-14T16:02:36Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective [80.02256726279451]
Adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years.
We propose a novel method that crafts adversarial examples by manipulating the distribution of the image.
Our method can significantly improve the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios.
arXiv Detail & Related papers (2022-10-09T09:58:51Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.