ProMark: Proactive Diffusion Watermarking for Causal Attribution
- URL: http://arxiv.org/abs/2403.09914v1
- Date: Thu, 14 Mar 2024 23:16:43 GMT
- Title: ProMark: Proactive Diffusion Watermarking for Causal Attribution
- Authors: Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, Shruti Agarwal
- Abstract summary: We propose ProMark, a causal attribution technique to attribute a synthetically generated image to its training data concepts.
The concept information is proactively embedded into the input training images using imperceptible watermarks.
We show that we can embed as many as $2^{16}$ unique watermarks into the training data, and each training image can contain more than one watermark.
- Score: 25.773438257321793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI (GenAI) is transforming creative workflows through the capability to synthesize and manipulate images via high-level prompts. Yet creatives are not well supported to receive recognition or reward for the use of their content in GenAI training. To this end, we propose ProMark, a causal attribution technique to attribute a synthetically generated image to its training data concepts like objects, motifs, templates, artists, or styles. The concept information is proactively embedded into the input training images using imperceptible watermarks, and the diffusion models (unconditional or conditional) are trained to retain the corresponding watermarks in generated images. We show that we can embed as many as $2^{16}$ unique watermarks into the training data, and each training image can contain more than one watermark. ProMark can maintain image quality whilst outperforming correlation-based attribution. Finally, several qualitative examples are presented, providing the confidence that the presence of the watermark conveys a causative relationship between training data and synthetic images.
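The abstract describes a proactive pipeline: a concept-specific, imperceptible watermark is added to every training image of a concept, the diffusion model learns to retain it, and decoding the watermark from a synthetic image attributes that image back to the concept. The sketch below is a minimal illustration of that embed/decode workflow using a simple additive spread-spectrum watermark; it is not ProMark's implementation, and the pattern generator, strength constant, and 16-bit concept IDs are assumptions made only to keep the example self-contained (real systems typically rely on learned watermark encoders/decoders for imperceptibility and robustness).

```python
# Minimal sketch of the proactive attribution workflow described in the abstract:
# embed a concept-keyed imperceptible watermark into training images, then decode
# it from a generated image to attribute the concept. All names and constants here
# (bit_patterns, STRENGTH, the spread-spectrum scheme itself) are illustrative
# assumptions, not the paper's implementation.
import numpy as np

BITS = 16          # up to 2**16 distinct concept watermarks, as in the abstract
STRENGTH = 2.0     # per-bit amplitude in gray levels (assumed)
H = W = 256        # image resolution (assumed)

def bit_patterns(seed: int = 0) -> np.ndarray:
    """One fixed pseudo-random +/-1 pattern per bit position (shared key)."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(BITS, H, W))

def embed(image: np.ndarray, concept_id: int, patterns: np.ndarray) -> np.ndarray:
    """Add a low-amplitude additive watermark encoding a 16-bit concept ID."""
    bits = [(concept_id >> i) & 1 for i in range(BITS)]              # LSB first
    signs = np.array([1.0 if b else -1.0 for b in bits])
    watermark = (signs[:, None, None] * patterns).sum(axis=0) * STRENGTH
    return np.clip(image + watermark, 0.0, 255.0)

def decode(image: np.ndarray, patterns: np.ndarray) -> int:
    """Recover the concept ID by correlating the image with each bit pattern."""
    centered = image - image.mean()                  # remove host-image DC term
    corr = (patterns * centered[None]).mean(axis=(1, 2))
    bits = (corr > 0).astype(int)
    return int(sum(b << i for i, b in enumerate(bits)))

# Toy usage: every training image of a concept would be watermarked with that
# concept's ID before diffusion training; at generation time, decode() maps a
# synthetic image back to the concept.
patterns = bit_patterns()
image = np.random.default_rng(1).uniform(0.0, 255.0, size=(H, W))
marked = embed(image, concept_id=0xBEEF, patterns=patterns)
print(hex(decode(marked, patterns)))   # expected to print 0xbeef
```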
Related papers
- Dynamic watermarks in images generated by diffusion models [46.1135899490656]
High-fidelity text-to-image diffusion models have revolutionized visual content generation, but their widespread use raises significant ethical concerns.
We propose a novel multi-stage watermarking framework for diffusion models, designed to establish copyright and trace generated images back to their source.
Our work advances the field of AI-generated content security by providing a scalable solution for model ownership verification and misuse prevention.
arXiv Detail & Related papers (2025-02-13T03:23:17Z) - Image Watermarking of Generative Diffusion Models [42.982489491857145]
We propose a watermarking technique that embeds watermark features into the diffusion model itself.
Our technique enables training of a paired watermark extractor for a generative model that is learned through an end-to-end process.
We demonstrate highly accurate watermark embedding/detection and show that it is also possible to distinguish between different watermarks embedded with our method to differentiate between generative models.
arXiv Detail & Related papers (2025-02-12T09:00:48Z) - Spread them Apart: Towards Robust Watermarking of Generated Content [4.332441337407564]
We propose an approach to embed watermarks into generated content, enabling later detection of that content and identification of the user who generated it.
We prove that the embedded watermarks are guaranteed to be robust against additive perturbations of bounded magnitude.
arXiv Detail & Related papers (2025-02-11T09:23:38Z) - On the Coexistence and Ensembling of Watermarks [93.15379331904602]
We find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness.
We show how ensembling can increase the overall message capacity and enable new trade-offs between capacity, accuracy, robustness and image quality, without needing to retrain the base models.
arXiv Detail & Related papers (2025-01-29T00:37:06Z) - Watermarking Training Data of Music Generation Models [6.902279764206365]
We investigate whether audio watermarking techniques can be used to detect unauthorized usage of content.
We compare outputs generated by a model trained on watermarked data to a model trained on non-watermarked data.
Our results show that audio watermarking techniques, including some that are imperceptible to humans, can lead to noticeable shifts in the model's outputs.
arXiv Detail & Related papers (2024-12-11T17:10:44Z) - A Training-Free Plug-and-Play Watermark Framework for Stable Diffusion [47.97443554073836]
Existing approaches involve training components of, or entire, Stable Diffusion models (SDs) to embed a watermark in generated images for traceability and responsibility attribution.
In the era of AI-generated content (AIGC), the rapid iteration of SDs makes such retraining of watermarking models costly.
We propose a training-free plug-and-play watermark framework for SDs.
arXiv Detail & Related papers (2024-04-08T15:29:46Z) - RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z) - Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking [67.60174799881597]
We propose a novel concept-watermarking scheme, in which watermark information is embedded into the target concept and then extracted from images generated with the watermarked concept.
In practice, the concept owner can upload his concept with different watermarks (i.e., serial numbers) to the platform, and the platform assigns different serial numbers to different users for subsequent tracing and forensics.
arXiv Detail & Related papers (2023-09-12T03:33:13Z) - T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text-to-image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic features and the watermark signal to remain compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z) - Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)