Imperceptible Protection against Style Imitation from Diffusion Models
- URL: http://arxiv.org/abs/2403.19254v1
- Date: Thu, 28 Mar 2024 09:21:00 GMT
- Title: Imperceptible Protection against Style Imitation from Diffusion Models
- Authors: Namhyuk Ahn, Wonhyuk Ahn, KiYoon Yoo, Daesik Kim, Seung-Hun Nam
- Abstract summary: We create a perceptual map to identify areas most sensitive to human eyes.
We then adjust the protection intensity guided by an instance-aware refinement.
Results show that our method substantially elevates the quality of the protected image without compromising on protection efficacy.
- Score: 9.548195579003897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in diffusion models has profoundly enhanced the fidelity of image generation. However, this has raised concerns about copyright infringement. While prior methods have introduced adversarial perturbations to prevent style imitation, most are accompanied by a degradation of the artwork's visual quality. Recognizing the importance of maintaining visual quality, we develop a protection method that is visually improved yet preserves its protection capability. To this end, we create a perceptual map to identify the areas most sensitive to human eyes. We then adjust the protection intensity guided by an instance-aware refinement. We also integrate a perceptual constraints bank to further improve imperceptibility. Results show that our method substantially elevates the quality of the protected image without compromising protection efficacy.
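The abstract outlines the general recipe of modulating protection strength with a perceptual sensitivity map instead of a uniform perturbation budget. The sketch below illustrates only that general idea, not the paper's actual pipeline: local variance stands in for the perceptual map, and the function names, the variance heuristic, and the budget range are illustrative assumptions (the paper's instance-aware refinement and perceptual constraints bank are not reproduced here).

```python
import numpy as np

def local_variance(gray, k=7):
    """Local variance in a k x k window; smooth regions yield low values."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mean = win.mean(axis=(-1, -2))
    return (win ** 2).mean(axis=(-1, -2)) - mean ** 2

def perceptual_budget_map(image, eps_min=1 / 255, eps_max=8 / 255):
    """Map local texture to a per-pixel perturbation budget: smooth
    (eye-sensitive) areas get eps_min, textured areas up to eps_max."""
    gray = image.mean(axis=-1)                # image: float in [0, 1], shape (H, W, 3)
    var = local_variance(gray)
    v = (var - var.min()) / (var.max() - var.min() + 1e-8)
    return eps_min + v * (eps_max - eps_min)  # shape (H, W)

def apply_protection(image, raw_perturbation):
    """Clamp an externally computed protective perturbation to the budget map."""
    budget = perceptual_budget_map(image)[..., None]
    delta = np.clip(raw_perturbation, -budget, budget)
    return np.clip(image + delta, 0.0, 1.0)
```

The design point the sketch captures is that smooth, low-texture regions, where perturbations are most visible, receive a small budget, while textured regions can absorb a stronger perturbation with less visual impact.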
Related papers
- Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI [61.35083814817094]
Several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online.
We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections (a minimal sketch of such a purification step appears after this list).
We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI.
arXiv Detail & Related papers (2024-06-17T18:51:45Z) - Diffusion-Based Hierarchical Image Steganography [60.69791384893602]
Hierarchical Image Steganography is a novel method that enhances the security and capacity of embedding multiple images into a single container.
It exploits the robustness of the Diffusion Model alongside the reversibility of the Flow Model.
The innovative structure can autonomously generate a container image, thereby securely and efficiently concealing multiple images and text.
arXiv Detail & Related papers (2024-05-19T11:29:52Z) - Adversarial Purification and Fine-tuning for Robust UDC Image Restoration [41.64534231708787]
Under-Display Camera (UDC) technology faces unique image degradation challenges exacerbated by the susceptibility to adversarial perturbations.
This study focuses on enhancing UDC image restoration models, with particular attention to their robustness against adversarial attacks.
arXiv Detail & Related papers (2024-02-21T09:06:04Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models [16.505593270720034]
We propose an adaptive greedy search for optimal time steps that seamlessly integrates with existing anti-customization methods.
Our approach significantly increases identity disruption, thereby protecting user privacy and copyright.
arXiv Detail & Related papers (2023-12-13T03:04:22Z) - IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
arXiv Detail & Related papers (2023-10-18T11:19:32Z) - DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation [44.2692621807947]
We develop a framework to generate adversarial privacy-preserving images that have a minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work to explore quality-preserving adversarial image generation based on the MND concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z) - Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
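The first related paper above cautions that low-effort purification, such as rescaling, can strip protective perturbations before mimicry. Below is a minimal sketch of such a purification step, assuming Pillow is available; the 0.5 factor, the LANCZOS filter, and the file names are illustrative choices, not the paper's exact procedure.

```python
from PIL import Image

def purify_by_rescaling(path_in, path_out, factor=0.5):
    """Naive purification: downscale then upscale the image.
    The resampling round trip acts as a low-pass filter that attenuates the
    high-frequency perturbations many protection tools rely on."""
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    small = img.resize((max(1, int(w * factor)), max(1, int(h * factor))),
                       Image.LANCZOS)
    small.resize((w, h), Image.LANCZOS).save(path_out)

# Hypothetical usage:
# purify_by_rescaling("protected_artwork.png", "purified_artwork.png")
```

Because most protection tools hide their signal in high-frequency detail, even this naive low-pass round trip can weaken them, which is consistent with that paper's caution.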
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.