BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing
- URL: http://arxiv.org/abs/2511.00143v1
- Date: Fri, 31 Oct 2025 17:54:28 GMT
- Title: BlurGuard: A Simple Approach for Robustifying Image Protection Against AI-Powered Editing
- Authors: Jinsu Kim, Yunhun Nam, Minseon Kim, Sangpil Kim, Jongheon Jeong
- Abstract summary: An emerging line of research focuses on implanting "protective" adversarial noise into images before their public release. We propose a surprisingly simple method to enhance the robustness of image protection methods against noise reversal techniques.
- Score: 25.580397148737685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in text-to-image models have increased the exposure of powerful image editing techniques as a tool, raising concerns about their potential for malicious use. An emerging line of research to address such threats focuses on implanting "protective" adversarial noise into images before their public release, so future attempts to edit them using text-to-image models can be impeded. However, subsequent works have shown that these adversarial noises are often easily "reversed," e.g., with techniques as simple as JPEG compression, casting doubt on the practicality of the approach. In this paper, we argue that adversarial noise for image protection should not only be imperceptible, as has been a primary focus of prior work, but also irreversible, viz., it should be difficult to detect as noise provided that the original image is hidden. We propose a surprisingly simple method to enhance the robustness of image protection methods against noise reversal techniques. Specifically, it applies an adaptive per-region Gaussian blur on the noise to adjust the overall frequency spectrum. Through extensive experiments, we show that our method consistently improves the per-sample worst-case protection performance of existing methods against a wide range of reversal techniques on diverse image editing scenarios, while also reducing quality degradation due to noise in terms of perceptual metrics. Code is available at https://github.com/jsu-kim/BlurGuard.
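The core idea admits a compact illustration. Below is a minimal sketch, not the implementation from the linked repository, of how an adaptive per-region Gaussian blur could be applied to protective noise; the grid partition and the texture-based choice of blur strength are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_protective_noise(original, protected, grid=8, sigma_lo=0.5, sigma_hi=2.0):
    """Blur protective adversarial noise region by region (illustrative sketch).

    `original` and `protected` are HxWx3 uint8 arrays, where `protected`
    already carries noise from an existing protection method. The image is
    split into a grid x grid partition and the noise in each cell is blurred
    with a sigma derived from the cell's texture: smoother cells get a
    stronger blur, since high-frequency noise is easiest to spot (and to
    reverse) there. This heuristic is an assumption, not the paper's rule.
    """
    original = original.astype(np.float32)
    noise = protected.astype(np.float32) - original
    out = np.empty_like(noise)
    h, w = original.shape[:2]
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            flatness = 1.0 / (1.0 + original[ys, xs].std())  # ~1 for flat cells
            sigma = sigma_lo + (sigma_hi - sigma_lo) * flatness
            # Smooth only spatially; leave the channel axis untouched.
            out[ys, xs] = gaussian_filter(noise[ys, xs], sigma=(sigma, sigma, 0))
    return np.clip(original + out, 0, 255).astype(np.uint8)
```

The point of the sketch is only that the blur reshapes the noise's frequency spectrum toward that of natural image content without touching the underlying image; the sigma range would need to be tuned per protection method.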
Related papers
- Active Adversarial Noise Suppression for Image Forgery Localization [56.98050814363447]
We introduce an Adversarial Noise Suppression Module (ANSM) that generates a defensive perturbation to suppress the attack effect of adversarial noise. To the best of our knowledge, this is the first report of adversarial defense in image forgery localization tasks.
arXiv Detail & Related papers (2025-06-15T14:53:27Z)
- DCT-Shield: A Robust Frequency Domain Defense against Malicious Image Editing [1.7624347338410742]
Recent defenses attempt to protect images by adding limited noise in the pixel space to disrupt the functioning of diffusion-based editing models. We propose a novel optimization approach that introduces adversarial perturbations directly in the frequency domain. By leveraging the JPEG pipeline, our method generates adversarial images that effectively prevent malicious image editing.
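As a rough illustration of what perturbing "directly in the frequency domain" means here, the sketch below nudges the DCT coefficients of 8x8 blocks, the same representation the JPEG pipeline quantizes; the random coefficient offsets and step size are placeholders, not DCT-Shield's optimization objective.

```python
import numpy as np
from scipy.fft import dctn, idctn

def perturb_dct_blocks(gray, step=2.0, keep_dc=True, seed=0):
    """Add a small perturbation to the 8x8 block DCT of a grayscale image.

    Illustrative only: a real defense would optimize these coefficient
    offsets against an editing model, whereas this sketch just shows where
    the perturbation lives so that JPEG compression, which operates on the
    same coefficients, cannot trivially strip it. `gray` is an HxW float
    array with values in [0, 255].
    """
    h, w = gray.shape
    out = gray.astype(np.float64).copy()
    rng = np.random.default_rng(seed)
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = dctn(gray[y:y + 8, x:x + 8], norm="ortho")
            delta = rng.uniform(-step, step, size=(8, 8))
            if keep_dc:
                delta[0, 0] = 0.0  # keep the block mean so the change stays subtle
            out[y:y + 8, x:x + 8] = idctn(block + delta, norm="ortho")
    return np.clip(out, 0, 255)
```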
arXiv Detail & Related papers (2025-04-24T19:14:50Z)
- Divide and Conquer: Heterogeneous Noise Integration for Diffusion-based Adversarial Purification [75.09791002021947]
Existing purification methods aim to disrupt adversarial perturbations by introducing a certain amount of noise through a forward diffusion process, followed by a reverse process to recover clean examples. This approach is fundamentally flawed as the uniform operation of the forward process compromises normal pixels while attempting to combat adversarial perturbations. We propose a heterogeneous purification strategy grounded in the interpretability of neural networks. Our method decisively applies higher-intensity noise to specific pixels that the target model focuses on while the remaining pixels are subjected to only low-intensity noise.
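The "higher-intensity noise where the model looks" idea can be pictured in a few lines: given a per-pixel attention or saliency map in [0, 1], however it is obtained, the forward noising strength is scaled per pixel instead of uniformly. The map and the noise-level range below are stand-ins, not the paper's procedure.

```python
import numpy as np

def heterogeneous_noising(image, saliency, sigma_lo=5.0, sigma_hi=40.0, seed=0):
    """Add per-pixel Gaussian noise whose strength follows a saliency map.

    `image` is HxWx3 in [0, 255]; `saliency` is HxW in [0, 1], with 1 marking
    pixels the target model attends to. Salient pixels receive strong noise
    (to wash out adversarial perturbations there) while the rest get only
    mild noise. A sketch of the stated idea, not the authors' diffusion-based
    purifier, which would also run a reverse denoising process afterwards.
    """
    rng = np.random.default_rng(seed)
    sigma = sigma_lo + (sigma_hi - sigma_lo) * saliency[..., None]
    noisy = image.astype(np.float32) + rng.standard_normal(image.shape) * sigma
    return np.clip(noisy, 0, 255)
```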
arXiv Detail & Related papers (2025-03-03T11:00:25Z)
- Anti-Reference: Universal and Immediate Defense Against Reference-Based Generation [24.381813317728195]
Anti-Reference is a novel method that protects images from the threats posed by reference-based generation techniques. We propose a unified loss function that enables joint attacks on fine-tuning-based customization methods. Our method shows certain transfer attack capabilities, effectively challenging both gray-box models and some commercial APIs.
arXiv Detail & Related papers (2024-12-08T16:04:45Z)
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [103.40147707280585]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models. We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process. We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
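Conceptually, this family of defenses is a projected-gradient loop whose objective is evaluated under randomly sampled inpainting masks. The sketch below shows that loop structure in PyTorch; `disruption_loss` is a hypothetical callable standing in for an objective such as one targeting early diffusion steps, and the rectangular mask sampling is an assumption for illustration.

```python
import torch

def protect_with_mask_augmentation(image, disruption_loss, eps=8 / 255,
                                   alpha=1 / 255, steps=50):
    """PGD-style image protection with random mask augmentation (sketch).

    `image` is a (1, 3, H, W) tensor in [0, 1]. `disruption_loss(x, mask)` is
    assumed to return a scalar that grows as inpainting of the masked region
    breaks down; we ascend it within an L-infinity ball of radius `eps`.
    """
    _, _, h, w = image.shape
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Sample a fresh rectangular mask each step so the protection does
        # not overfit to a single editing mask (mask augmentation).
        mh, mw = h // 2, w // 2
        top = torch.randint(0, h - mh + 1, (1,)).item()
        left = torch.randint(0, w - mw + 1, (1,)).item()
        mask = torch.zeros(1, 1, h, w, device=image.device)
        mask[..., top:top + mh, left:left + mw] = 1.0

        loss = disruption_loss((image + delta).clamp(0, 1), mask)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the objective
            delta.clamp_(-eps, eps)             # stay within the L-inf budget
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```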
arXiv Detail & Related papers (2024-10-08T05:19:19Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting [133.55037976429088]
We investigate the adversarial robustness of vision transformers equipped with BERT pretraining (e.g., BEiT, MAE).
A surprising observation is that MAE has significantly worse adversarial robustness than other BERT pretraining methods.
We propose a simple yet effective way to boost the adversarial robustness of MAE.
arXiv Detail & Related papers (2023-08-20T16:27:17Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
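The masking step itself is easy to picture. Below is a small sketch, under assumed shapes and an illustrative masking ratio, of how random pixels of a noisy input could be dropped before a denoising network is asked to reconstruct the clean image.

```python
import torch

def mask_random_pixels(noisy, ratio=0.75, fill=0.0, generator=None):
    """Randomly mask a fraction of input pixels for masked denoising training.

    `noisy` is an (N, C, H, W) batch. A Bernoulli keep-mask is drawn per pixel
    (shared across channels) and masked pixels are replaced by `fill`; during
    training the network reconstructs the clean target from this partial
    input, which is the idea summarized above. The 0.75 ratio is an
    illustrative choice, not the paper's setting.
    """
    n, _, h, w = noisy.shape
    keep = (torch.rand(n, 1, h, w, generator=generator,
                       device=noisy.device) >= ratio).to(noisy.dtype)
    return noisy * keep + fill * (1.0 - keep), keep
```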
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Image Denoising Using the Geodesics' Gramian of the Manifold Underlying Patch-Space [1.7767466724342067]
We propose a novel and computationally efficient image denoising method that is capable of producing accurate images.
To preserve image smoothness, this method inputs patches partitioned from the image rather than pixels.
We validate the performance of this method against benchmark image processing methods.
arXiv Detail & Related papers (2020-10-14T04:07:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.