Tree-Ring Watermarks: Fingerprints for Diffusion Images that are
Invisible and Robust
- URL: http://arxiv.org/abs/2305.20030v3
- Date: Tue, 4 Jul 2023 03:52:06 GMT
- Title: Tree-Ring Watermarks: Fingerprints for Diffusion Images that are
Invisible and Robust
- Authors: Yuxin Wen, John Kirchenbauer, Jonas Geiping, Tom Goldstein
- Abstract summary: Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
- Score: 55.91987293510401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Watermarking the outputs of generative models is a crucial technique for
tracing copyright and preventing potential harm from AI-generated content. In
this paper, we introduce a novel technique called Tree-Ring Watermarking that
robustly fingerprints diffusion model outputs. Unlike existing methods that
perform post-hoc modifications to images after sampling, Tree-Ring Watermarking
subtly influences the entire sampling process, resulting in a model fingerprint
that is invisible to humans. The watermark embeds a pattern into the initial
noise vector used for sampling. These patterns are structured in Fourier space
so that they are invariant to convolutions, crops, dilations, flips, and
rotations. After image generation, the watermark signal is detected by
inverting the diffusion process to retrieve the noise vector, which is then
checked for the embedded signal. We demonstrate that this technique can be
easily applied to arbitrary diffusion models, including text-conditioned Stable
Diffusion, as a plug-in with negligible loss in FID. Our watermark is
semantically hidden in the image space and is far more robust than watermarking
alternatives that are currently deployed. Code is available at
https://github.com/YuxinWenRick/tree-ring-watermark.
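A minimal PyTorch sketch of the idea described in the abstract is given below. It is not the authors' reference implementation (see the GitHub link above for that); the latent shape (4x64x64, Stable-Diffusion-style), the single watermarked channel, the ring radius, and the detection threshold are illustrative placeholders, and the DDIM inversion step is assumed to be provided by the sampler.

```python
import torch

def ring_mask(size=64, radius=10, channel=3):
    """Boolean mask selecting a centered disk of one latent channel in the
    (fftshifted) Fourier plane."""
    y, x = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    c = (size - 1) / 2
    disk = (x - c) ** 2 + (y - c) ** 2 <= radius ** 2
    mask = torch.zeros(1, 4, size, size, dtype=torch.bool)
    mask[:, channel] = disk
    return mask

def make_key(size=64, radius=10):
    """Tree-ring key: one Gaussian-sampled complex value per integer radius,
    giving concentric rings that are approximately rotation-invariant."""
    y, x = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    c = (size - 1) / 2
    r = torch.sqrt((x - c) ** 2 + (y - c) ** 2).round().long().clamp(max=radius)
    ring_values = torch.randn(radius + 1, dtype=torch.complex64)
    return ring_values[r]  # (size, size) complex ring pattern

def embed(init_noise, key, mask):
    """Overwrite the masked Fourier coefficients of the initial noise with the key."""
    freq = torch.fft.fftshift(torch.fft.fft2(init_noise), dim=(-2, -1))
    freq[mask] = key.expand_as(freq)[mask]
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real

def detect(recovered_noise, key, mask, threshold=50.0):
    """Mean L1 distance between the spectrum of the DDIM-inverted noise and the
    key, restricted to the mask; `threshold` is a placeholder that would be
    calibrated on unwatermarked images for a target false-positive rate."""
    freq = torch.fft.fftshift(torch.fft.fft2(recovered_noise), dim=(-2, -1))
    dist = (freq[mask] - key.expand_as(freq)[mask]).abs().mean().item()
    return dist < threshold, dist

# Sketch of the workflow (sampling and DDIM inversion omitted):
mask = ring_mask()
key = make_key()
z_T = embed(torch.randn(1, 4, 64, 64), key, mask)  # watermarked initial noise
# ...sample an image from z_T; later, invert the (possibly edited) image back
# to an estimated z_T and call detect() on it.
```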
Related papers
- Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models [10.726987194250116]
We introduce Shallow Diffuse, a new watermarking technique that embeds robust and invisible watermarks into diffusion model outputs.
Our theoretical and empirical analyses show that Shallow Diffuse greatly enhances the consistency of data generation and the detectability of the watermark.
arXiv Detail & Related papers (2024-10-28T14:51:04Z)
- Stable Signature is Unstable: Removing Image Watermark from Diffusion Models [1.656188668325832]
We propose a new attack to remove the watermark from a diffusion model by fine-tuning it.
Our results show that our attack can effectively remove the watermark from a diffusion model such that its generated images are non-watermarked.
arXiv Detail & Related papers (2024-05-12T03:04:48Z)
- DiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model [15.982765272033058]
Latent Diffusion Models (LDMs) enable a wide range of applications but raise ethical concerns regarding illegal utilization.
A new technique called DiffuseTrace semantically embeds invisible watermarks in all generated images for future detection.
arXiv Detail & Related papers (2024-05-04T15:32:57Z)
- Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [71.13610023354967]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models.
We propose a diffusion model watermarking technique that is both performance-lossless and training-free.
arXiv Detail & Related papers (2024-04-07T13:30:10Z)
- Who Wrote this Code? Watermarking for Code Generation [53.24895162874416]
We propose Selective WatErmarking via Entropy Thresholding (SWEET) to detect machine-generated text.
Our experiments show that SWEET significantly improves code quality preservation while outperforming all baselines.
arXiv Detail & Related papers (2023-05-24T11:49:52Z)
- The Stable Signature: Rooting Watermarks in Latent Diffusion Models [29.209892051477194]
This paper introduces an active strategy combining image watermarking and Latent Diffusion Models.
The goal is for all generated images to conceal an invisible watermark allowing for future detection and/or identification.
A pre-trained watermark extractor recovers the hidden signature from any generated image and a statistical test then determines whether it comes from the generative model.
arXiv Detail & Related papers (2023-03-27T17:57:33Z)
- A Watermark for Large Language Models [84.95327142027183]
We propose a watermarking framework for proprietary language models.
The watermark can be embedded with negligible impact on text quality.
It can be detected using an efficient open-source algorithm without access to the language model API or parameters.
arXiv Detail & Related papers (2023-01-24T18:52:59Z)
- Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
- Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods either require users to supply the watermark location or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with stacked attention-guided ResUNets to simulate the process of detection, removal, and refinement.
We extensively evaluate our algorithm on four different datasets under various settings; the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
- Towards transformation-resilient provenance detection of digital media [38.865642862858195]
We introduce ReSWAT, a framework for learning transformation-resilient watermark detectors.
Our method can reliably detect the provenance of a signal, even if it has been through several post-processing transformations.
arXiv Detail & Related papers (2020-11-14T18:08:07Z)