Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack
- URL: http://arxiv.org/abs/2411.16437v1
- Date: Mon, 25 Nov 2024 14:39:18 GMT
- Title: Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack
- Authors: Xide Xu, Muhammad Atif Butt, Sandesh Kamath, Bogdan Raducanu
- Abstract summary: We propose a novel and efficient adversarial attack method, Concept Protection by Selective Attention Manipulation (CoPSAM).
For this purpose, we carefully construct imperceptible noise that is added to clean samples to obtain their adversarial counterparts.
Experimental validation on a subset of the CelebA-HQ face image dataset demonstrates that our approach outperforms existing methods.
- Score: 5.357486699062561
- License:
- Abstract: The growing demand for customized visual content has led to the rise of personalized text-to-image (T2I) diffusion models. Despite their remarkable potential, they pose significant privacy risks when misused for malicious purposes. In this paper, we propose a novel and efficient adversarial attack method, Concept Protection by Selective Attention Manipulation (CoPSAM), which targets only the cross-attention layers of a T2I diffusion model. For this purpose, we carefully construct imperceptible noise to be added to clean samples to obtain their adversarial counterparts. The noise is optimized during the fine-tuning process by maximizing the discrepancy between the cross-attention maps of the user-specific token and the class-specific token. Experimental validation on a subset of the CelebA-HQ face image dataset demonstrates that our approach outperforms existing methods. Beyond this, the qualitative evaluation reveals two important advantages: (i) we obtain better protection results at lower noise levels than our competitors; and (ii) we protect the content from unauthorized use, thereby protecting the individual's identity from potential misuse.
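Read as an optimization problem, CoPSAM amounts to maximizing an attention-discrepancy loss under an imperceptibility budget. The following is a minimal sketch of such a loop, assuming a diffusers-style VAE, UNet, and scheduler; `extract_cross_attn` is a hypothetical hook for reading the UNet's cross-attention probabilities, and this standalone PGD-style loop is a simplification of the paper's procedure, which crafts the noise during fine-tuning.

```python
import torch
import torch.nn.functional as F

def copsam_perturb(x_clean, vae, unet, scheduler, text_emb,
                   user_idx, class_idx,
                   eps=4 / 255, alpha=1 / 255, steps=50):
    """Craft imperceptible noise that pushes apart the cross-attention
    maps of the user-specific and class-specific prompt tokens."""
    delta = torch.zeros_like(x_clean, requires_grad=True)
    for _ in range(steps):
        x_adv = (x_clean + delta).clamp(0, 1)
        # Encode to latent space (0.18215 is the usual SD scaling factor).
        latents = vae.encode(x_adv).latent_dist.sample() * 0.18215
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (latents.size(0),), device=latents.device)
        noisy = scheduler.add_noise(latents, torch.randn_like(latents), t)
        # Hypothetical hook: cross-attention probabilities with shape
        # [batch, spatial_positions, prompt_tokens].
        attn = extract_cross_attn(unet, noisy, t, text_emb)
        # Negative discrepancy: minimizing this loss *maximizes* the gap
        # between the user token's and the class token's attention maps.
        loss = -F.mse_loss(attn[..., user_idx], attn[..., class_idx])
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed-gradient PGD step
            delta.clamp_(-eps, eps)             # keep the noise imperceptible
            delta.grad = None
    return (x_clean + delta).clamp(0, 1).detach()
```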
Related papers
- DDAP: Dual-Domain Anti-Personalization against Text-to-Image Diffusion Models [18.938687631109925]
Diffusion-based personalized visual content generation technologies have achieved significant breakthroughs.
However, when misused to fabricate fake news or unsettling content targeting individuals, these technologies could cause considerable societal harm.
This paper introduces a novel Dual-Domain Anti-Personalization framework (DDAP).
By alternating between these two domain-specific methods, we construct the DDAP framework, effectively harnessing the strengths of both domains.
arXiv Detail & Related papers (2024-07-29T16:11:21Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - Disrupting Diffusion: Token-Level Attention Erasure Attack against Diffusion-based Customization [19.635385099376066]
Malicious users have misused diffusion-based customization methods like DreamBooth to create fake images.
In this paper, we propose DisDiff, a novel adversarial attack method to disrupt the diffusion model outputs.
arXiv Detail & Related papers (2024-05-31T02:45:31Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that can generate imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models [16.505593270720034]
We propose an adaptive greedy search for optimal time steps that seamlessly integrates with existing anti-customization methods (a minimal sketch of this idea appears after the list below).
Our approach significantly increases identity disruption, thereby protecting user privacy and copyright.
arXiv Detail & Related papers (2023-12-13T03:04:22Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation [25.55296442023984]
We propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation.
This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
arXiv Detail & Related papers (2023-06-02T20:19:19Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets, respectively.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
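As referenced in the SimAC entry above, the adaptive greedy time-step search can be pictured as scoring candidate diffusion time steps and keeping the most useful ones for the next attack iteration. A hedged sketch follows; the scoring criterion here (the current adversarial loss) is an assumption, and `loss_fn` stands in for whichever anti-customization objective the search is plugged into.

```python
import torch

def greedy_timesteps(loss_fn, latents, candidates, k=5):
    """Score each candidate diffusion time step with the current
    adversarial loss and return the k highest-scoring ones."""
    scored = []
    with torch.no_grad():
        for t in candidates:
            scored.append((loss_fn(latents, t).item(), t))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # largest loss first
    return [t for _, t in scored[:k]]
```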
This list is automatically generated from the titles and abstracts of the papers on this site.