Privacy Protection Against Personalized Text-to-Image Synthesis via Cross-image Consistency Constraints
- URL: http://arxiv.org/abs/2504.12747v1
- Date: Thu, 17 Apr 2025 08:39:32 GMT
- Title: Privacy Protection Against Personalized Text-to-Image Synthesis via Cross-image Consistency Constraints
- Authors: Guanyu Wang, Kailong Wang, Yihao Huang, Mingyi Zhou, Zhang Qing cnwatcher, Geguang Pu, Li Li
- Abstract summary: Cross-image Anti-Personalization (CAP) is a novel framework that enhances resistance to personalization by enforcing style consistency across perturbed images. We develop a dynamic ratio adjustment strategy that adaptively balances the impact of the consistency loss throughout the attack iterations.
- Score: 9.385284914809294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of diffusion models and personalization techniques has made it possible to recreate individual portraits from just a few publicly available images. While such capabilities empower various creative applications, they also introduce serious privacy concerns, as adversaries can exploit them to generate highly realistic impersonations. To counter these threats, anti-personalization methods have been proposed, which add adversarial perturbations to published images to disrupt the training of personalization models. However, existing approaches largely overlook the intrinsic multi-image nature of personalization and instead adopt a naive strategy of applying perturbations independently, as commonly done in single-image settings. This neglects the opportunity to leverage inter-image relationships for stronger privacy protection. Therefore, we advocate for a group-level perspective on privacy protection against personalization. Specifically, we introduce Cross-image Anti-Personalization (CAP), a novel framework that enhances resistance to personalization by enforcing style consistency across perturbed images. Furthermore, we develop a dynamic ratio adjustment strategy that adaptively balances the impact of the consistency loss throughout the attack iterations. Extensive experiments on the classical CelebHQ and VGGFace2 benchmarks show that CAP substantially improves existing methods.
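The attack loop the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: `personal_grad` stands in for the gradient of whatever per-image anti-personalization loss is used, the consistency term uses a crude per-channel-mean style proxy rather than a real style loss, and the linear ratio schedule is a simplification of the paper's dynamic ratio adjustment.

```python
import numpy as np

def consistency_loss(imgs):
    # Crude style proxy: squared deviation of each image's per-channel
    # mean from the group mean (stand-in for a real style loss).
    means = np.stack([im.mean(axis=(0, 1)) for im in imgs])
    return float(((means - means.mean(axis=0)) ** 2).mean())

def consistency_grad(imgs, i):
    # Approximate pixel gradient of the loss above for image i, negated
    # so that following it pulls image i toward the group's style.
    means = np.stack([im.mean(axis=(0, 1)) for im in imgs])
    diff = means[i] - means.mean(axis=0)          # shape (C,)
    h, w, _ = imgs[i].shape
    return -np.broadcast_to(diff / (h * w), imgs[i].shape)

def cap_perturb(images, personal_grad, steps=10, eps=8 / 255, alpha=2 / 255):
    """Group-level PGD-style loop: blend a per-image anti-personalization
    gradient with a cross-image consistency gradient, shifting weight
    toward consistency over the iterations (a toy 'dynamic ratio')."""
    advs = [img.astype(float).copy() for img in images]
    for t in range(steps):
        ratio = t / max(steps - 1, 1)             # 0 -> 1 across iterations
        for i in range(len(advs)):
            g = (1 - ratio) * personal_grad(advs[i]) + ratio * consistency_grad(advs, i)
            advs[i] = np.clip(advs[i] + alpha * np.sign(g),
                              images[i] - eps, images[i] + eps)  # L-inf ball
            advs[i] = np.clip(advs[i], 0.0, 1.0)  # stay a valid image
    return advs
```

The key structural point is that every image's update depends on the whole group through `consistency_grad`, which is what distinguishes the group-level view from applying single-image perturbations independently.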
Related papers
- Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models [62.979954692036685]
We introduce PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring and semantic prompt search.
Our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
arXiv Detail & Related papers (2025-04-25T02:51:23Z)
- Personalize Anything for Free with Diffusion Transformer [20.385520869825413]
Recent training-free approaches struggle with identity preservation, applicability, and compatibility with diffusion transformers (DiTs). We uncover the untapped potential of DiT, where simply replacing denoising tokens with those of a reference subject achieves zero-shot subject reconstruction. We propose Personalize Anything, a training-free framework that achieves personalized image generation in DiT through: 1) timestep-adaptive token replacement that enforces subject consistency via early-stage injection and enhances flexibility through late-stage regularization, and 2) patch perturbation strategies to boost structural diversity.
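The timestep-adaptive token replacement this summary mentions can be sketched as below. All names and the split between a hard early-stage injection and a soft late-stage pull are illustrative assumptions, not the paper's actual schedule:

```python
import numpy as np

def adaptive_token_replace(gen_tokens, ref_tokens, mask, step, total_steps,
                           switch_frac=0.5, late_weight=0.1):
    """Hypothetical sketch: early denoising steps hard-replace the masked
    (subject-region) tokens with the reference subject's tokens; late
    steps only softly regularize toward them, preserving flexibility."""
    if step < switch_frac * total_steps:
        # Early stage: hard injection enforces subject consistency.
        return np.where(mask[:, None], ref_tokens, gen_tokens)
    # Late stage: mild pull toward the reference tokens.
    return gen_tokens + late_weight * mask[:, None] * (ref_tokens - gen_tokens)
```

The design point is that the replacement strength is a function of the denoising timestep, so identity is locked in early while later steps remain free to adapt layout and style.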
arXiv Detail & Related papers (2025-03-16T17:51:16Z)
- Enhancing Facial Privacy Protection via Weakening Diffusion Purification [36.33027625681024]
Social media has led to the widespread sharing of individual portrait images, which poses serious privacy risks. Recent methods employ diffusion models to generate adversarial face images for privacy protection. We propose learning unconditional embeddings to increase the learning capacity for adversarial modifications. We integrate an identity-preserving structure to maintain structural consistency between the original and generated images.
arXiv Detail & Related papers (2025-03-13T13:27:53Z)
- PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models [51.458089902581456]
We introduce PersGuard, a novel backdoor-based approach that prevents malicious personalization of specific images. Our method significantly outperforms existing techniques, offering a more robust solution for privacy and copyright protection.
arXiv Detail & Related papers (2025-02-22T09:47:55Z)
- ID-Cloak: Crafting Identity-Specific Cloaks Against Personalized Text-to-Image Generation [54.14901999875917]
We investigate the creation of identity-specific cloaks that safeguard all images belonging to a specific identity. We craft identity-specific cloaks with a proposed novel objective that encourages the cloak to guide the model away from its normal output. Our method, along with the proposed identity-specific cloak setting, marks a notable advance in realistic privacy protection.
arXiv Detail & Related papers (2025-02-12T03:52:36Z)
- Visual-Friendly Concept Protection via Selective Adversarial Perturbations [23.780603071185197]
We propose the Visual-Friendly Concept Protection (VCPro) framework, which prioritizes the protection of key concepts chosen by the image owner.
To ensure these perturbations are as inconspicuous as possible, we introduce a relaxed optimization objective.
Experiments validate that VCPro achieves a better trade-off between the visibility of perturbations and protection effectiveness.
arXiv Detail & Related papers (2024-08-16T04:14:28Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and to generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework [20.652130361862053]
We propose the Adversarial Decoupling Augmentation Framework (ADAF) to enhance the defensive performance of facial privacy protection algorithms.
ADAF introduces multi-level text-related augmentations for defense stability against various attacker prompts.
arXiv Detail & Related papers (2023-05-06T09:00:50Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.