Adversarial Example Does Good: Preventing Painting Imitation from
Diffusion Models via Adversarial Examples
- URL: http://arxiv.org/abs/2302.04578v2
- Date: Tue, 6 Jun 2023 06:34:46 GMT
- Title: Adversarial Example Does Good: Preventing Painting Imitation from
Diffusion Models via Adversarial Examples
- Authors: Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song,
Zhengui Xue, Ruhui Ma, Haibing Guan
- Abstract summary: Diffusion Models (DMs) have sparked a wave in AI for Art yet raise new copyright concerns.
In this paper, we propose to utilize adversarial examples for DMs to protect human-created artworks.
Our method can be a powerful tool for human artists to protect their copyright against infringers equipped with DM-based AI-for-Art applications.
- Score: 32.701307512642835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Diffusion Models (DMs) have sparked a wave in AI for Art yet raise new
copyright concerns, where infringers benefit from using unauthorized paintings
to train DMs to generate novel paintings in a similar style. To address these
emerging copyright violations, in this paper, we are the first to explore and
propose to utilize adversarial examples for DMs to protect human-created
artworks. Specifically, we first build a theoretical framework to define and
evaluate the adversarial examples for DMs. Then, based on this framework, we
design a novel algorithm, named AdvDM, which exploits a Monte-Carlo estimation
of adversarial examples for DMs by optimizing upon different latent variables
sampled from the reverse process of DMs. Extensive experiments show that the
generated adversarial examples can effectively hinder DMs from extracting their
features. Therefore, our method can be a powerful tool for human artists to
protect their copyright against infringers equipped with DM-based AI-for-Art
applications. The code of our method is available on GitHub:
https://github.com/mist-project/mist.git.
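For intuition, here is a minimal PGD-style sketch of the Monte-Carlo estimation described in the abstract. This is not the official mist-project code: `eps_model` and `alphas_cumprod` are assumed interfaces (an epsilon-prediction network and its cumulative noise schedule), and the expectation over latent variables is approximated, as is common, by sampling timesteps and noise from the forward diffusion process.

```python
# Hypothetical sketch of AdvDM-style protection, NOT the authors' code.
# Assumptions: eps_model(x_t, t) -> predicted noise; alphas_cumprod is the
# model's cumulative noise schedule (a 1-D tensor of length T).
import torch
import torch.nn.functional as F

def advdm_attack(eps_model, alphas_cumprod, x, steps=40, mc_samples=4,
                 budget=8 / 255, step_size=1 / 255):
    """Projected gradient *ascent* on the diffusion training loss w.r.t. x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        # Monte-Carlo estimate of E_{t, noise}[ ||eps_model(x_t, t) - noise||^2 ]
        for _ in range(mc_samples):
            t = torch.randint(0, len(alphas_cumprod), (x.shape[0],),
                              device=x.device)
            noise = torch.randn_like(x_adv)
            a = alphas_cumprod[t].view(-1, 1, 1, 1)
            x_t = a.sqrt() * x_adv + (1 - a).sqrt() * noise  # forward diffusion
            loss = loss + F.mse_loss(eps_model(x_t, t), noise)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()          # ascend the loss
            x_adv = torch.clamp(x_adv, x - budget, x + budget).clamp(0.0, 1.0)
    return x_adv.detach()

if __name__ == "__main__":
    # Toy stand-in for a pretrained UNet, only to make the sketch runnable.
    class TinyEps(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, x_t, t):  # a real model would also condition on t
            return self.net(x_t)

    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    img = torch.rand(1, 3, 64, 64)
    protected = advdm_attack(TinyEps(), alphas_cumprod, img, steps=5)
    print(float((protected - img).abs().max()))  # stays within the 8/255 budget
```

Ascending the denoising loss within an L-infinity budget makes the image hard for a DM to fit during finetuning while keeping the perturbation visually subtle.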
Related papers
- Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models [58.065255696601604]
We use the compositional property of diffusion models, which allows multiple prompts to be leveraged in a single image generation.
We argue that it is essential to consider all possible approaches to image generation with diffusion models that can be employed by an adversary.
arXiv Detail & Related papers (2024-04-21T16:35:16Z)
- Watermark-embedded Adversarial Examples for Copyright Protection against Diffusion Models [10.993094140231667]
There are concerns that Diffusion Models could be used to imitate unauthorized creations and thus raise copyright issues.
We propose a novel framework that embeds personal watermarks in the generation of adversarial examples.
This work provides a simple yet powerful way to protect copyright from DM-based imitation.
arXiv Detail & Related papers (2024-04-15T01:27:07Z)
- The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline [30.80691226540351]
We formalize the Copyright Infringement Attack on generative AI models and propose a backdoor attack method, SilentBadDiffusion.
Our method strategically embeds connections between pieces of copyrighted information and text references in poisoning data.
Our experiments show the stealth and efficacy of the poisoning data.
arXiv Detail & Related papers (2024-01-07T08:37:29Z)
- VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models [69.20464255450788]
Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising.
Recent studies have shown that basic unconditional DMs are vulnerable to backdoor injection.
This paper presents a unified backdoor attack framework to expand the current scope of backdoor analysis for DMs.
arXiv Detail & Related papers (2023-06-12T05:14:13Z)
- Mist: Towards Improved Adversarial Examples for Diffusion Models [0.8883733362171035]
Diffusion Models (DMs) have enabled great success in artificial-intelligence-generated content, especially in artwork creation.
However, infringers can profit by using DMs to imitate unauthorized human-created paintings.
Recent research suggests that various adversarial examples for diffusion models can be effective tools against these copyright infringements.
arXiv Detail & Related papers (2023-05-22T03:43:34Z)
- A Recipe for Watermarking Diffusion Models [53.456012264767914]
Diffusion models (DMs) have demonstrated strong potential on generative tasks.
Widespread interest exists in incorporating DMs into downstream applications, such as producing or editing photorealistic images.
However, practical deployment and unprecedented power of DMs raise legal issues, including copyright protection and monitoring of generated content.
Watermarking has been a proven solution for copyright protection and content monitoring, but it is underexplored in the DMs literature.
arXiv Detail & Related papers (2023-03-17T17:25:10Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class (see the sketch after this entry).
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
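As a rough, toy illustration of that min-max game (my own sketch based only on the summary above, not the authors' implementation), a perturbation generator can be trained to maximize a classifier's loss while the classifier trains against the generated attacks:

```python
# Toy sketch of an AEG-style min-max game (hypothetical): a generator crafts
# L-infinity-bounded perturbations that maximize a classifier's loss, while
# the classifier trains against them.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
budget = 0.1                           # bound on the generated perturbation
clf = torch.nn.Linear(10, 2)           # stand-in for the hypothesis class
gen = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 10), torch.nn.Tanh(),  # Tanh keeps output in [-1, 1]
)
opt_f = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(64, 10)            # toy data
    y = (x.sum(dim=1) > 0).long()      # toy labels

    # Generator step: ascend the classification loss on perturbed inputs.
    loss_g = -F.cross_entropy(clf(x + budget * gen(x)), y)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Classifier step: descend the loss on freshly generated attacks.
    loss_f = F.cross_entropy(clf((x + budget * gen(x)).detach()), y)
    opt_f.zero_grad()
    loss_f.backward()
    opt_f.step()
```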
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.