Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework
- URL: http://arxiv.org/abs/2305.03980v1
- Date: Sat, 6 May 2023 09:00:50 GMT
- Title: Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework
- Authors: Ruijia Wu, Yuhang Wang, Huafeng Shi, Zhipeng Yu, Yichao Wu, Ding Liang
- Abstract summary: We propose the Adversarial Decoupling Augmentation Framework (ADAF) to enhance the defensive performance of facial privacy protection algorithms.
ADAF introduces multi-level text-related augmentations for defense stability against various attacker prompts.
- Score: 20.652130361862053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoising diffusion models have shown remarkable potential in various
generation tasks. The open-source large-scale text-to-image model, Stable
Diffusion, becomes prevalent as it can generate realistic artistic or facial
images with personalization through fine-tuning on a limited number of new
samples. However, this has raised privacy concerns as adversaries can acquire
facial images online and fine-tune text-to-image models for malicious editing,
leading to baseless scandals, defamation, and disruption to victims' lives.
Prior research efforts have focused on deriving adversarial loss from
conventional training processes for facial privacy protection through
adversarial perturbations. However, existing algorithms face two issues: 1)
they neglect the image-text fusion module, which is the vital module of
text-to-image diffusion models, and 2) their defensive performance is unstable
against different attacker prompts. In this paper, we propose the Adversarial
Decoupling Augmentation Framework (ADAF), addressing these issues by targeting
the image-text fusion module to enhance the defensive performance of facial
privacy protection algorithms. ADAF introduces multi-level text-related
augmentations for defense stability against various attacker prompts.
Concretely, considering the vision, text, and common unit space, we propose
Vision-Adversarial Loss, Prompt-Robust Augmentation, and Attention-Decoupling
Loss. Extensive experiments on CelebA-HQ and VGGFace2 demonstrate ADAF's
promising performance, surpassing existing algorithms.
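The abstract describes the three losses only at a high level. As a rough illustration of the underlying mechanism such protection methods share, the following minimal NumPy sketch shows an L-infinity-bounded PGD loop that perturbs an image so its features shift inside a fusion space. A toy linear map stands in for the image-text fusion (cross-attention) module of Stable Diffusion; all names, dimensions, and hyperparameters here are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the image-text fusion module: a fixed linear map projecting
# a flattened "face image" into the space where text tokens attend. This is a
# hypothetical surrogate; the real target in ADAF is the cross-attention of
# Stable Diffusion, which is not reproduced here.
D_IMG, D_FUSE = 64, 16
W = rng.normal(size=(D_FUSE, D_IMG)) / np.sqrt(D_IMG)

def fusion(x):
    return W @ x

def pgd_protect(x, eps=0.05, alpha=0.01, steps=40):
    """L_inf-bounded PGD that maximizes the shift of the fusion features,
    i.e. pushes fusion(x + delta) away from fusion(x)."""
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Gradient of ||W delta||^2 w.r.t. delta is 2 W^T W delta
        # (analytic, since the surrogate module is linear).
        grad = 2.0 * W.T @ (W @ delta)
        # Sign-gradient ascent step, then projection back onto the eps-ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta

x = rng.normal(size=D_IMG)           # the "face image" to protect
delta = pgd_protect(x)               # imperceptible protective perturbation
shift = np.linalg.norm(fusion(x + delta) - fusion(x))
```

The key design point the abstract raises is what the loss is computed against: prior work derives it from the ordinary diffusion training objective, while ADAF targets the fusion features directly, so the perturbation degrades the model's ability to bind the face to any attacker prompt.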
Related papers
- AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models [20.37481116837779]
AdvI2I is a novel framework that manipulates input images to induce diffusion models to generate NSFW content.
By optimizing a generator to craft adversarial images, AdvI2I circumvents existing defense mechanisms.
We show that both AdvI2I and AdvI2I-Adaptive can effectively bypass current safeguards.
arXiv Detail & Related papers (2024-10-28T19:15:06Z)
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [93.45507533317405]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models.
We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process.
We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
arXiv Detail & Related papers (2024-10-08T05:19:19Z)
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technology has been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Transferable Adversarial Facial Images for Privacy Protection [15.211743719312613]
We present a novel face privacy protection scheme with improved transferability while maintaining high visual quality.
We first exploit global adversarial latent search to traverse the latent space of the generative model.
We then introduce a key landmark regularization module to preserve the visual identity information.
arXiv Detail & Related papers (2024-07-18T02:16:11Z)
- Disrupting Diffusion-based Inpainters with Semantic Digression [20.10876695099345]
The fabrication of visual misinformation on the web and social media has increased exponentially with the advent of text-to-image diffusion models.
Namely, Stable Diffusion inpainters allow the synthesis of maliciously inpainted images of personal and private figures and copyrighted content, also known as deepfakes.
To combat such generations, a disruption framework, namely Photoguard, has been proposed, which adds adversarial noise to the context image to disrupt inpainting synthesis.
While their framework suggests a diffusion-friendly approach, the disruption is not sufficiently strong, and it requires a significant amount of GPU resources and time to immunize the context image.
arXiv Detail & Related papers (2024-07-14T17:21:19Z) - Anonymization Prompt Learning for Facial Privacy-Preserving Text-to-Image Generation [56.46932751058042]
We train a learnable prompt prefix for text-to-image diffusion models, which forces the model to generate anonymized facial identities.
Experiments demonstrate the successful anonymization performance of APL, which anonymizes any specific individuals without compromising the quality of non-identity-specific image generation.
arXiv Detail & Related papers (2024-05-27T07:38:26Z) - DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection [60.73609509756533]
DiffAM is a novel approach to generate high-quality protected face images with adversarial makeup transferred from reference images.
Experiments demonstrate that DiffAM achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting.
arXiv Detail & Related papers (2024-05-16T08:05:36Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for
Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - Protecting Facial Privacy: Generating Adversarial Identity Masks via
Style-robust Makeup Transfer [24.25863892897547]
Adversarial makeup transfer GAN (AMT-GAN) is a novel face protection method aiming to construct adversarial face images.
In this paper, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noises and the cycle consistence loss in makeup transfer.
arXiv Detail & Related papers (2022-03-07T03:56:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.