Hey That's Mine Imperceptible Watermarks are Preserved in Diffusion
Generated Outputs
- URL: http://arxiv.org/abs/2308.11123v2
- Date: Thu, 9 Nov 2023 03:39:59 GMT
- Title: Hey That's Mine Imperceptible Watermarks are Preserved in Diffusion
Generated Outputs
- Authors: Luke Ditria, Tom Drummond
- Abstract summary: We show that a generative Diffusion model trained on data that has been imperceptibly watermarked will generate new images with these watermarks present.
Our system offers a solution to protect intellectual property when sharing content online.
- Score: 12.763826933561244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models have seen an explosion in popularity with the
release of huge generative Diffusion models like Midjourney and Stable
Diffusion to the public. Because of this new ease of access, questions
surrounding the automated collection of data and issues regarding content
ownership have started to build. In this paper we present new work which
aims to provide ways of protecting content when it is shared publicly. We
show that a generative Diffusion model trained on data that has been
imperceptibly watermarked will generate new images with these watermarks
present. We further show that if a given watermark is correlated with a
certain feature of the training data, the generated images will also have
this correlation. Using statistical tests, we show that we are able to
determine whether a model has been trained on marked data, and which data
was marked. As a result, our system offers a solution for protecting
intellectual property when sharing content online.
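The abstract does not spell out the watermarking scheme or the statistical
test, so the following is a minimal sketch under stated assumptions: a
classic spread-spectrum mark (a key-derived, low-amplitude +/-1 pattern added
to each image) and a simple z-test over per-image detection scores. All
function names, the amplitude alpha, and the threshold are illustrative, not
the authors' implementation.

import numpy as np

def embed_watermark(img, key, alpha=1.0):
    # Add a key-derived +/-1 pattern at low amplitude; at alpha ~ 1 the
    # change to an 8-bit image is imperceptible.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return np.clip(img + alpha * pattern, 0, 255)

def detection_score(img, key):
    # Correlation with the key pattern: ~0 in expectation for unmarked
    # images, ~alpha for marked ones.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return float(np.mean(img * pattern))

def model_trained_on_marked_data(generated_images, key, z_threshold=4.0):
    # Z-test across a batch of generated samples: under the null
    # hypothesis (model never saw marked data) the mean score is zero.
    scores = np.array([detection_score(g, key) for g in generated_images])
    z = scores.mean() / (scores.std(ddof=1) / np.sqrt(len(scores)) + 1e-12)
    return z > z_threshold

In this sketch, a defender would embed with a private key before publishing,
sample a batch of images from a suspect model, and run the test; the
threshold trades false positives against detection power, and using a
separate key per data subset would support the paper's claim about telling
which data was marked.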
Related papers
- Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models [10.726987194250116]
We introduce Shallow Diffuse, a new watermarking technique that embeds robust and invisible watermarks into diffusion model outputs.
Our theoretical and empirical analyses show that Shallow Diffuse greatly enhances the consistency of data generation and the detectability of the watermark.
arXiv Detail & Related papers (2024-10-28T14:51:04Z)
- AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let SD models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z)
- A Watermark-Conditioned Diffusion Model for IP Protection [31.969286898467985]
We propose a unified watermarking framework for content copyright protection in the context of diffusion models.
To this end, we introduce a Watermark-conditioned Diffusion model called WaDiff.
Our method is effective and robust in both the detection and owner identification tasks.
arXiv Detail & Related papers (2024-03-16T11:08:15Z)
- Proving membership in LLM pretraining data via data watermarks [20.57538940552033]
This work proposes using data watermarks to enable principled detection with only black-box model access.
We study two watermarks: one that inserts random sequences, and another that randomly substitutes characters with Unicode lookalikes (a minimal sketch of the latter appears after this list).
We show that we can robustly detect hashes from BLOOM-176B's training data, as long as they occurred at least 90 times.
arXiv Detail & Related papers (2024-02-16T18:49:27Z)
- A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law grants creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z)
- Unbiased Watermark for Large Language Models [67.43415395591221]
This study examines how significantly watermarks impact the quality of model-generated outputs.
It is possible to integrate watermarks without affecting the output probability distribution.
The presence of watermarks does not compromise the performance of the model in downstream tasks.
arXiv Detail & Related papers (2023-09-22T12:46:38Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content in a protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally utilized the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking [54.40184736491652]
We propose a backdoor-based watermarking approach that serves as a general framework for safeguarding publicly available data.
By inserting a small number of watermarking samples into the dataset, our approach enables the learning model to implicitly learn a secret function set by defenders.
This hidden function can then be used as a watermark to track down third-party models that use the dataset illegally.
arXiv Detail & Related papers (2023-03-20T21:54:30Z)
- On Function-Coupled Watermarks for Deep Neural Networks [15.478746926391146]
We propose a novel DNN watermarking solution that can effectively defend against watermark removal attacks.
Our key insight is to enhance the coupling of the watermark and model functionalities.
Results show a 100% watermark authentication success rate under aggressive watermark removal attacks.
arXiv Detail & Related papers (2023-02-08T05:55:16Z)
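For the data-watermark entry above ("Proving membership in LLM pretraining
data via data watermarks"), here is a minimal sketch of the Unicode-lookalike
substitution; the lookalike map, the substitution rate, and the keyed RNG are
illustrative assumptions rather than the paper's exact scheme.

import random

# Latin letters and visually near-identical Cyrillic codepoints.
LOOKALIKES = {"a": "\u0430", "e": "\u0435", "o": "\u043e",
              "p": "\u0440", "c": "\u0441"}

def watermark_text(text, key, rate=0.05):
    # Substitute a small, keyed random fraction of characters with
    # lookalikes; the marked text renders identically to a human reader.
    rng = random.Random(key)
    return "".join(
        LOOKALIKES[ch] if ch in LOOKALIKES and rng.random() < rate else ch
        for ch in text
    )

Detection then needs only black-box access: roughly, compare the model's loss
on the published watermarked text against its loss on alternative null
watermarks of the same document; a significantly lower loss on the published
version is evidence that it appeared in the training data.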
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.