Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models
- URL: http://arxiv.org/abs/2504.18032v1
- Date: Fri, 25 Apr 2025 02:51:23 GMT
- Title: Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models
- Authors: Chen Chen, Daochang Liu, Mubarak Shah, Chang Xu
- Abstract summary: We introduce PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring and semantic prompt search. Our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
- Score: 62.979954692036685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts, yet their proclivity for memorizing training set images has sparked concerns about the originality of the generated images and privacy issues, potentially leading to legal complications for both model owners and users, particularly when the memorized images contain proprietary content. Although methods to mitigate these issues have been suggested, enhancing privacy often results in a significant decrease in the utility of the outputs, as indicated by text-alignment scores. To bridge the research gap, we introduce a novel method, PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and incorporating semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
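To make the mechanism concrete, here is a minimal sketch of how re-anchored classifier-free guidance could look, assuming a diffusers-style UNet (`.sample` output, `encoder_hidden_states` conditioning). The memorization signal (the conditional/unconditional prediction gap), its threshold, and the way the searched prompt embedding `search_emb` enters the update are illustrative assumptions, not the paper's exact procedure:

```python
import torch

@torch.no_grad()
def pr_guided_eps(unet, x_t, t, orig_emb, null_emb, search_emb,
                  scale=7.5, mem_thresh=5.0):
    """Classifier-free guidance with PR/SS-style modifications (a sketch).

    The gap between conditional and unconditional noise predictions is a
    widely used memorization proxy; when it is large, guidance is
    re-anchored to the original prompt and steered toward a semantically
    searched alternative instead of the memorized trajectory.
    """
    eps_cond = unet(x_t, t, encoder_hidden_states=orig_emb).sample
    eps_null = unet(x_t, t, encoder_hidden_states=null_emb).sample

    gap = (eps_cond - eps_null).flatten(1).norm(dim=1).mean()
    if gap < mem_thresh:                        # no memorization: plain CFG
        return eps_null + scale * (eps_cond - eps_null)

    # Likely memorized: anchor at the original prompt, guide toward the
    # searched prompt so the sample leaves the memorized solution.
    eps_search = unet(x_t, t, encoder_hidden_states=search_emb).sample
    return eps_cond + scale * (eps_search - eps_cond)
```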
Related papers
- Privacy Protection Against Personalized Text-to-Image Synthesis via Cross-image Consistency Constraints [9.385284914809294]
Cross-image Anti-Personalization (CAP) is a novel framework that enhances resistance to personalization by enforcing style consistency across perturbed images.
We develop a dynamic ratio adjustment strategy that adaptively balances the impact of the consistency loss throughout the attack iterations.
arXiv Detail & Related papers (2025-04-17T08:39:32Z)
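A minimal sketch of the style-consistency idea in the CAP entry above, using channel Gram matrices as the style statistic and a linear schedule for the dynamic ratio; both are assumptions, since the paper's exact loss and schedule are not given here:

```python
import torch

def gram(feats):
    """Channel Gram matrix, a standard style statistic."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_consistency_loss(feats):
    """Pull every perturbed image's style toward the batch mean style."""
    g = gram(feats)
    return ((g - g.mean(dim=0, keepdim=True)) ** 2).mean()

def dynamic_ratio(step, total_steps, lo=0.1, hi=1.0):
    """Illustrative schedule: the consistency weight grows linearly
    over the attack iterations."""
    return lo + (hi - lo) * step / max(total_steps - 1, 1)
```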
- Harnessing Frequency Spectrum Insights for Image Copyright Protection Against Diffusion Models
We present novel evidence that diffusion-generated images faithfully preserve the statistical properties of their training data.
We introduce CoprGuard, a robust frequency-domain watermarking framework to safeguard against unauthorized image usage.
arXiv Detail & Related papers (2025-03-14T04:27:50Z)
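For intuition, here is a generic frequency-domain watermark of the kind the CoprGuard entry describes: a key-seeded pattern embedded in a mid-frequency band and detected by spectral correlation. This is a textbook scheme, not CoprGuard's actual algorithm:

```python
import numpy as np

def embed_freq_watermark(img, key=0, strength=0.05):
    """Embed a key-seeded pattern in a mid-frequency annulus of the
    spectrum (multiplicative, so low frequencies stay untouched)."""
    h, w = img.shape
    rng = np.random.default_rng(key)
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    band = (r > min(h, w) / 8) & (r < min(h, w) / 4)
    pattern = rng.standard_normal((h, w)) * band
    F = np.fft.fftshift(np.fft.fft2(img))
    F *= (1 + strength * pattern)
    marked = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    return marked, pattern

def detect_freq_watermark(img, pattern):
    """Normalized correlation of the log-magnitude spectrum with the key."""
    mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img)))).ravel()
    p = pattern.ravel()
    mag, p = mag - mag.mean(), p - p.mean()
    return float(mag @ p / (np.linalg.norm(mag) * np.linalg.norm(p) + 1e-8))
```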
- CopyJudge: Automated Copyright Infringement Identification and Mitigation in Text-to-Image Diffusion Models [58.58208005178676]
We propose CopyJudge, an automated copyright infringement identification framework. We employ an abstraction-filtration-comparison test framework with multi-LVLM debate to assess the likelihood of infringement. Based on the judgments, we introduce a general LVLM-based mitigation strategy.
arXiv Detail & Related papers (2025-02-21T08:09:07Z)
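A hedged sketch of the multi-judge debate loop described in the CopyJudge entry above; the judge callables (e.g., LVLM wrappers) and their return format are assumptions, since the paper's actual prompts and protocol are not given here:

```python
from statistics import mean

def copyright_debate(gen_img, ref_img, judges, rounds=2):
    """Generic multi-judge debate loop. Each judge is a caller-supplied
    callable returning {"score": float in [0, 1], "rationale": str};
    judges see prior rationales so later rounds can rebut earlier ones."""
    history, verdicts = [], []
    for _ in range(rounds):
        verdicts = [judge(gen_img, ref_img, history) for judge in judges]
        history += [v["rationale"] for v in verdicts]
    return mean(v["score"] for v in verdicts)  # final infringement likelihood
```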
- Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299]
We introduce TEAWIB (Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending), a novel framework. TEAWIB incorporates a ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models. Experiments validate the effectiveness of TEAWIB, showcasing state-of-the-art performance in perceptual quality and attribution accuracy.
arXiv Detail & Related papers (2024-09-17T07:52:09Z)
- Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models [27.83772742404565]
We introduce a Prompt-Agnostic Adversarial Perturbation (PAP) method for customized diffusion models.
PAP first models the prompt distribution using a Laplace Approximation, and then produces prompt-agnostic perturbations by maximizing a disturbance expectation.
This approach effectively handles prompt-agnostic attacks, leading to improved defense stability.
arXiv Detail & Related papers (2024-08-20T06:17:56Z)
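A sketch of the PAP recipe above under stated assumptions: the prompt-embedding posterior is a diagonal Gaussian N(mu, var) from the Laplace approximation, and `denoise_loss` is a caller-supplied differentiable disturbance objective (e.g., the diffusion denoising error):

```python
import torch

def pap_perturb(x, denoise_loss, mu, var, steps=50, eps=8 / 255,
                lr=2 / 255, n_samples=4):
    """PGD ascent on the expected denoising disturbance over prompts
    sampled from the Laplace-approximated posterior (a sketch)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        embs = mu + var.sqrt() * torch.randn(n_samples, *mu.shape)
        loss = torch.stack([denoise_loss(x + delta, e) for e in embs]).mean()
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the expected loss
            delta.clamp_(-eps, eps)           # keep the perturbation small
        delta.grad = None
    return (x + delta).detach()
```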
- LCM-Lookahead for Encoder-based Text-to-Image Personalization [82.56471486184252]
We explore the potential of using shortcut mechanisms to guide the personalization of text-to-image models.
We focus on encoder-based personalization approaches, and demonstrate that by tuning them with a lookahead identity loss, we can achieve higher identity fidelity.
arXiv Detail & Related papers (2024-04-04T17:43:06Z)
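The lookahead identity loss can be sketched as follows, with `lcm_decode` (a few-step LCM preview of the final image) and `face_embed` (an off-the-shelf identity embedder) as assumed, caller-supplied components:

```python
import torch
import torch.nn.functional as F

def lookahead_identity_loss(latents, lcm_decode, face_embed, ref_img):
    """A fast LCM pass turns the current latents into an approximate
    final image, and an identity embedder compares it with the
    reference face (cosine distance)."""
    preview = lcm_decode(latents)               # few-step shortcut decode
    e_pred = F.normalize(face_embed(preview), dim=-1)
    e_ref = F.normalize(face_embed(ref_img), dim=-1)
    return 1.0 - (e_pred * e_ref).sum(-1).mean()
```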
- Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention [62.671435607043875]
Research indicates that text-to-image diffusion models replicate images from their training data, raising serious concerns about potential copyright infringement and privacy risks.
We reveal that during memorization, the cross-attention tends to focus disproportionately on the embeddings of specific tokens.
We introduce an innovative approach to detect and mitigate memorization in diffusion models.
arXiv Detail & Related papers (2024-03-17T01:27:00Z)
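The detection signal described above can be sketched directly from the cross-attention maps; the concentration statistic and threshold below are illustrative choices, not the paper's calibrated detector:

```python
import torch

def attention_concentration(attn_probs):
    """attn_probs: (heads, image_tokens, text_tokens) cross-attention.
    Returns the largest share of total attention mass absorbed by a
    single text token; disproportionate focus on specific tokens is
    the memorization signal the entry describes."""
    per_token = attn_probs.sum(dim=(0, 1))   # mass per text token
    return (per_token / per_token.sum()).max().item()

def is_memorized(attn_probs, thresh=0.5):
    return attention_concentration(attn_probs) > thresh
```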
- Privacy-Preserving Diffusion Model Using Homomorphic Encryption [5.282062491549009]
We introduce HE-Diffusion, a privacy-preserving Stable Diffusion framework leveraging homomorphic encryption.
We propose a novel min-distortion method that enables efficient partial image encryption.
We successfully implement HE-based privacy-preserving stable diffusion inference.
arXiv Detail & Related papers (2024-03-09T04:56:57Z)
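A sketch of the min-distortion partial-encryption idea, using per-tile variance as an illustrative proxy for which regions most need protection; HE-Diffusion's actual selection criterion may differ, and the encryption step itself is left to an HE backend:

```python
import numpy as np

def select_encryption_mask(img, budget=0.25, tile=8):
    """Spend the encryption budget on the tiles carrying the most
    detail and leave the rest in plaintext, so most of the inference
    can run outside the costly HE domain."""
    h, w = img.shape[0] // tile, img.shape[1] // tile
    crop = img[:h * tile, :w * tile]
    tiles = crop.reshape(h, tile, w, tile).var(axis=(1, 3))
    k = max(1, int(budget * tiles.size))
    cut = np.partition(tiles.ravel(), -k)[-k]   # k-th largest variance
    mask = np.repeat(np.repeat(tiles >= cut, tile, axis=0), tile, axis=1)
    return mask  # True = pixel goes through homomorphic encryption
```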
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns a privacy-preserved representation through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
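One common instantiation of such a joint loss, shown as an assumption rather than the paper's exact objective: a task term plus an adversarial reconstruction term that rewards features an attacker cannot invert:

```python
import torch.nn.functional as F

def joint_loss(task_loss, features, reconstructor, x_private, alpha=0.5):
    """Keep task performance while making the uploaded representation
    hard to invert; `reconstructor` plays the adversary."""
    recon_loss = F.mse_loss(reconstructor(features), x_private)
    return task_loss - alpha * recon_loss  # reward inversion-resistant features
```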
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning for Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
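A sketch of the PPI training signal, substituting plain gradient saliency for the paper's learned salience-mapping module: interventions on causally relevant pixels should break the prediction, while random interventions should leave it intact:

```python
import torch
import torch.nn.functional as F

def ppi_losses(model, x, y, top_q=0.9):
    """Contrastive pressure in PPI's spirit (illustrative thresholds):
    deleting the most salient pixels should raise the loss, deleting an
    equal share of random pixels should not."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y[:, None]).sum()
    sal = torch.autograd.grad(score, x)[0].abs().sum(1, keepdim=True)
    cut = sal.flatten(1).quantile(top_q, dim=1).view(-1, 1, 1, 1)
    causal_cut = x * (sal < cut).float()                 # delete salient pixels
    random_cut = x * (torch.rand_like(sal) > 1 - top_q).float()

    flip = F.cross_entropy(model(causal_cut), y)  # should become hard
    keep = F.cross_entropy(model(random_cut), y)  # should stay easy
    return keep - flip  # minimize: encourage flip, preserve under random cuts
```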