ProMark: Proactive Diffusion Watermarking for Causal Attribution
- URL: http://arxiv.org/abs/2403.09914v1
- Date: Thu, 14 Mar 2024 23:16:43 GMT
- Title: ProMark: Proactive Diffusion Watermarking for Causal Attribution
- Authors: Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, Shruti Agarwal
- Abstract summary: We propose ProMark, a causal attribution technique to attribute a synthetically generated image to its training data concepts.
The concept information is proactively embedded into the input training images using imperceptible watermarks.
We show that we can embed as many as $2^{16}$ unique watermarks into the training data, and each training image can contain more than one watermark.
- Score: 25.773438257321793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI (GenAI) is transforming creative workflows through the capability to synthesize and manipulate images via high-level prompts. Yet creatives are not well supported to receive recognition or reward for the use of their content in GenAI training. To this end, we propose ProMark, a causal attribution technique to attribute a synthetically generated image to its training data concepts like objects, motifs, templates, artists, or styles. The concept information is proactively embedded into the input training images using imperceptible watermarks, and the diffusion models (unconditional or conditional) are trained to retain the corresponding watermarks in generated images. We show that we can embed as many as $2^{16}$ unique watermarks into the training data, and each training image can contain more than one watermark. ProMark can maintain image quality whilst outperforming correlation-based attribution. Finally, several qualitative examples are presented, providing the confidence that the presence of the watermark conveys a causative relationship between training data and synthetic images.
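The abstract's claim of $2^{16}$ unique watermarks follows from treating each watermark as a fixed-length 16-bit payload. The sketch below is a hypothetical encoding (not ProMark's actual embedding scheme) showing how a concept index maps to and from such a payload:

```python
def concept_to_payload(concept_id: int, n_bits: int = 16) -> list[int]:
    # Map a concept index to a fixed-length bit string; with 16 bits
    # there are 2**16 = 65,536 distinct payloads. The spatial embedding
    # of these bits into an image is what the watermarking model learns.
    if not 0 <= concept_id < 2 ** n_bits:
        raise ValueError("concept_id out of range for payload width")
    return [(concept_id >> i) & 1 for i in reversed(range(n_bits))]

def payload_to_concept(bits: list[int]) -> int:
    # Inverse mapping: recover the concept index from decoded bits.
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out
```

Because the mapping is a bijection over `range(2**16)`, decoding a generated image's payload uniquely identifies the training concept, which is what makes the attribution causal rather than correlational.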
Related papers
- LaWa: Using Latent Space for In-Generation Image Watermarking [11.089926858383476]
Imperceptible image watermarking is one viable solution towards such concerns.
LaWa is an in-generation image watermarking method designed for LDMs.
We show that LaWa can also be used as a general image watermarking method.
arXiv Detail & Related papers (2024-08-11T22:03:45Z)
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z)
- A Training-Free Plug-and-Play Watermark Framework for Stable Diffusion [47.97443554073836]
Existing approaches involve training components or entire SDs to embed a watermark in generated images for traceability and responsibility attribution.
In the era of AI-generated content (AIGC), the rapid iteration of SDs renders retraining with watermark models costly.
We propose a training-free plug-and-play watermark framework for SDs.
arXiv Detail & Related papers (2024-04-08T15:29:46Z)
- RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed as RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
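A provable false-positive guarantee of the kind described here typically comes from a bit-matching argument: if an unwatermarked image's decoded bits behave like fair coin flips, the chance of matching at least a threshold number of payload bits is a binomial tail. The sketch below computes that standard bound; it is a generic illustration, not the RAW paper's exact analysis:

```python
from math import comb

def fpr_bound(n_bits: int, threshold: int) -> float:
    # Probability that >= threshold of n_bits i.i.d. fair coin flips
    # match a fixed payload -- the false-positive rate of a detector
    # that flags an image when the match count reaches the threshold.
    tail = sum(comb(n_bits, k) for k in range(threshold, n_bits + 1))
    return tail / 2 ** n_bits
```

For example, requiring an exact 16-bit match yields a false-positive rate of $2^{-16} \approx 1.5 \times 10^{-5}$, and relaxing the threshold trades robustness to bit errors against a larger bound.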
arXiv Detail & Related papers (2024-01-23T22:00:49Z)
- FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z)
- Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking [67.60174799881597]
We propose the novel concept watermarking, where watermark information is embedded into the target concept and then extracted from generated images based on the watermarked concept.
In practice, the concept owner can upload the concept with different watermarks (i.e., serial numbers) to the platform, and the platform allocates different serial numbers to different users for subsequent tracing and forensics.
arXiv Detail & Related papers (2023-09-12T03:33:13Z)
- T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible in pixels.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)
- Robust Watermarking using Diffusion of Logo into Autoencoder Feature Maps [10.072876983072113]
In this paper, we propose to use an end-to-end network for watermarking.
We use a convolutional neural network (CNN) to control the embedding strength based on the image content.
Different image processing attacks are simulated as a network layer to improve the robustness of the model.
arXiv Detail & Related papers (2021-05-24T05:18:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.