Pixel Is Not A Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models
- URL: http://arxiv.org/abs/2408.11810v1
- Date: Wed, 21 Aug 2024 17:56:34 GMT
- Title: Pixel Is Not A Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models
- Authors: Chun-Yen Shih, Li-Xuan Peng, Jia-Wei Liao, Ernie Chu, Cheng-Fu Chou, Jun-Cheng Chen
- Abstract summary: Diffusion Models have emerged as powerful generative models for high-quality image synthesis, with many subsequent image editing techniques based on them.
Previous works have attempted to safeguard images from diffusion-based editing by adding imperceptible perturbations.
Our work proposes a novel attacking framework with a feature representation attack loss that exploits vulnerabilities in denoising UNets and a latent optimization strategy to enhance the naturalness of protected images.
- Score: 9.905296922309157
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Diffusion Models have emerged as powerful generative models for high-quality image synthesis, with many subsequent image editing techniques based on them. However, the ease of text-based image editing introduces significant risks, such as malicious editing for scams or intellectual property infringement. Previous works have attempted to safeguard images from diffusion-based editing by adding imperceptible perturbations. These methods are costly and specifically target prevalent Latent Diffusion Models (LDMs), while Pixel-domain Diffusion Models (PDMs) remain largely unexplored and robust against such attacks. Our work addresses this gap by proposing a novel attacking framework with a feature representation attack loss that exploits vulnerabilities in denoising UNets and a latent optimization strategy to enhance the naturalness of protected images. Extensive experiments demonstrate the effectiveness of our approach in attacking dominant PDM-based editing methods (e.g., SDEdit) while maintaining reasonable protection fidelity and robustness against common defense methods. Additionally, our framework is extensible to LDMs, achieving comparable performance to existing approaches.
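To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a PGD-style feature-representation attack of the general kind described; it is not the authors' implementation. The `feature_fn` helper, the negative-MSE loss, and the L-infinity budget are illustrative assumptions, and the paper's latent optimization strategy for improving naturalness is omitted.

```python
import torch
import torch.nn.functional as F

def feature_attack(feature_fn, x_clean, eps=8 / 255, alpha=1 / 255, steps=100):
    """Sketch of a PGD-style feature-representation attack on a pixel-domain
    diffusion model (PDM).

    feature_fn: callable mapping an image batch (B, C, H, W) in [0, 1] to an
        intermediate feature map of the PDM's denoising UNet at a fixed
        timestep (e.g., collected via a forward hook). This helper is assumed
        for illustration and is not part of the paper's released code.
    Returns an image whose perturbation stays within an L-infinity budget of
    eps while its UNet features are pushed away from the clean image's.
    """
    with torch.no_grad():
        feats_clean = feature_fn(x_clean)

    x_adv = x_clean.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Negative MSE: minimizing this loss maximizes the feature discrepancy,
        # which is what degrades SDEdit-style editing of the protected image.
        loss = -F.mse_loss(feature_fn(x_adv), feats_clean)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()
            # Project back into the L-infinity ball around the clean image.
            x_adv = x_clean + (x_adv - x_clean).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```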
Related papers
- AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption [25.06674328160838]
Malicious adversaries can exploit diffusion models for inpainting tasks, such as replacing a specific image region with a celebrity's likeness.
We propose ADVPAINT, a novel framework that generates adversarial perturbations to disrupt the adversary's inpainting tasks.
Our experimental results demonstrate that ADVPAINT's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods.
arXiv Detail & Related papers (2025-03-13T06:05:40Z) - Anti-Diffusion: Preventing Abuse of Modifications of Diffusion-Based Models [48.15314463057229]
When misused, diffusion-based techniques can lead to severe negative social impacts.
Some works have been proposed to provide defense against the abuse of diffusion-based methods.
We propose Anti-Diffusion, a privacy protection system applicable to both tuning and editing techniques.
arXiv Detail & Related papers (2025-03-07T17:23:52Z) - DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination [5.7254228484416325]
DiffPAD is a novel framework that harnesses the power of diffusion models for adversarial patch decontamination.
We show that DiffPAD achieves state-of-the-art adversarial robustness against patch attacks and excels in recovering naturalistic images without patch remnants.
arXiv Detail & Related papers (2024-10-31T15:09:36Z) - DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [93.45507533317405]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models.
We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process.
We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
arXiv Detail & Related papers (2024-10-08T05:19:19Z) - DDAP: Dual-Domain Anti-Personalization against Text-to-Image Diffusion Models [18.938687631109925]
Diffusion-based personalized visual content generation technologies have achieved significant breakthroughs.
However, when misused to fabricate fake news or unsettling content targeting individuals, these technologies could cause considerable societal harm.
This paper introduces a novel Dual-Domain Anti-Personalization framework (DDAP).
By alternating between its two domain-specific perturbation methods, DDAP effectively harnesses the strengths of both domains.
arXiv Detail & Related papers (2024-07-29T16:11:21Z) - MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits [76.25962336540226]
JIGMARK is a first-of-its-kind watermarking technique that enhances robustness through contrastive learning.
Our evaluation reveals that JIGMARK significantly surpasses existing watermarking solutions in resilience to diffusion-model edits.
arXiv Detail & Related papers (2024-06-06T03:31:41Z) - Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models [11.91784429717735]
We propose CAAT, a generic and efficient approach to fool latent diffusion models (LDMs).
We show that a subtle gradient on an image can significantly impact the cross-attention layers, thus changing the mapping between text and image.
Experiments demonstrate that CAAT is compatible with diverse diffusion models and outperforms baseline attack methods.
arXiv Detail & Related papers (2024-04-23T14:31:15Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose Adv-Diffusion, a unified framework that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - Toward effective protection against diffusion based mimicry through score distillation [15.95715097030366]
Efforts have been made to add perturbations to protect images from diffusion-based mimicry pipelines.
Most existing methods are ineffective, or even impractical, for individual users.
We present novel findings on attacking latent diffusion models and propose new plug-and-play strategies for more effective protection.
arXiv Detail & Related papers (2023-10-02T18:56:12Z) - Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation [25.55296442023984]
We propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation.
This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
arXiv Detail & Related papers (2023-06-02T20:19:19Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets, respectively.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - JPEG Compressed Images Can Bypass Protections Against AI Editing [48.340067730457584]
Imperceptible perturbations have been proposed as a means of protecting images from malicious editing.
We find that the aforementioned perturbations are not robust to JPEG compression.
arXiv Detail & Related papers (2023-04-05T05:30:09Z) - Raising the Cost of Malicious AI-Powered Image Editing [82.71990330465115]
We present an approach to mitigating the risks of malicious image editing posed by large diffusion models.
The key idea is to immunize images so as to make them resistant to manipulation by these models.
arXiv Detail & Related papers (2023-02-13T18:38:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.