JPEG Compressed Images Can Bypass Protections Against AI Editing
- URL: http://arxiv.org/abs/2304.02234v2
- Date: Fri, 7 Apr 2023 20:33:57 GMT
- Title: JPEG Compressed Images Can Bypass Protections Against AI Editing
- Authors: Pedro Sandoval-Segura, Jonas Geiping, Tom Goldstein
- Abstract summary: Imperceptible perturbations have been proposed as a means of protecting images from malicious editing.
We find that the aforementioned perturbations are not robust to JPEG compression.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently developed text-to-image diffusion models make it easy to edit or
create high-quality images. Their ease of use has raised concerns about the
potential for malicious editing or deepfake creation. Imperceptible
perturbations have been proposed as a means of protecting images from malicious
editing by preventing diffusion models from generating realistic images.
However, we find that the aforementioned perturbations are not robust to JPEG
compression, which poses a major weakness because of the common usage and
availability of JPEG. We discuss the importance of robustness for additive
imperceptible perturbations and encourage alternative approaches to protect
images against editing.
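The paper's core observation, that a simple JPEG re-encode can strip additive imperceptible perturbations, can be illustrated with a minimal sketch. This is not the authors' exact procedure: it uses Pillow and NumPy, synthetic noise stands in for a real protection scheme, and the `jpeg_round_trip` helper and its quality setting are illustrative assumptions.

```python
# Illustrative sketch: "purifying" a protected image with a JPEG round-trip.
# Synthetic additive noise stands in for a real protection perturbation;
# quality=65 is an arbitrary choice, not a value from the paper.
import io

import numpy as np
from PIL import Image


def jpeg_round_trip(img: Image.Image, quality: int = 65) -> Image.Image:
    """Re-encode an image as JPEG in memory, discarding high-frequency detail."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Simulate a "protected" image: clean pixels plus a small additive perturbation.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
perturbation = rng.integers(-4, 5, size=clean.shape).astype(np.int16)
protected = np.clip(clean.astype(np.int16) + perturbation, 0, 255).astype(np.uint8)

# The round-tripped image is what an editor would actually feed to a
# diffusion model, bypassing the additive protection.
purified = np.asarray(jpeg_round_trip(Image.fromarray(protected)))
```

Because JPEG quantizes away exactly the kind of low-amplitude, high-frequency signal these protections rely on, the purified image typically behaves like an unprotected one when passed to an editing model.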
Related papers
- Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299] (2024-09-17)
  We introduce TEAWIB, a novel framework for effective user attribution in latent diffusion models via watermark-informed blending.
  TEAWIB incorporates a ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models.
  Experiments validate the effectiveness of TEAWIB, showing state-of-the-art performance in perceptual quality and attribution accuracy.
- Pixel Is Not A Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models [9.905296922309157] (2024-08-21)
  Diffusion models have emerged as powerful generative models for high-quality image synthesis, and many subsequent image editing techniques build on them.
  Previous works have attempted to safeguard images from diffusion-based editing by adding imperceptible perturbations.
  This work proposes a novel attack framework with a feature-representation attack loss that exploits vulnerabilities in denoising UNets, together with a latent optimization strategy that enhances the naturalness of the attacked images.
- JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits [76.25962336540226] (2024-06-06)
  JIGMARK is a first-of-its-kind watermarking technique that enhances robustness through contrastive learning.
  Its evaluation shows that JIGMARK significantly surpasses existing watermarking solutions in resilience to diffusion-model edits.
- EditGuard: Versatile Image Watermarking for Tamper Localization and Copyright Protection [19.140822655858873] (2023-12-12)
  We propose EditGuard, a proactive forensics framework that unifies copyright protection and tamper-agnostic localization.
  It offers meticulous embedding of imperceptible watermarks and precise decoding of tampered areas and copyright information.
  Experiments demonstrate that EditGuard balances tamper-localization accuracy, copyright-recovery precision, and generalizability to various AIGC-based tampering methods.
- EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models [26.846110318670934] (2023-11-19)
  We propose EditShield, a protection method against unauthorized modifications by text-to-image diffusion models.
  Specifically, EditShield adds imperceptible perturbations that shift the latent representation used in the diffusion process.
  Experiments demonstrate EditShield's effectiveness on both synthetic and real-world datasets.
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903] (2023-10-30)
  Diffusion-based image generation models can create artistic images that mimic an artist's style, or maliciously edit original images to produce fake content.
  Several attempts have been made to protect original images from such unauthorized data usage by adding imperceptible perturbations.
  This work introduces IMPRESS, a purification-perturbation platform for evaluating the effectiveness of imperceptible perturbations as a protective measure.
- FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589] (2023-10-03)
  We propose FT-Shield, a watermarking solution tailored to the fine-tuning of text-to-image diffusion models.
  FT-Shield addresses copyright-protection challenges with new watermark generation and detection strategies.
- DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501] (2023-05-23)
  DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
  It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.