Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs
- URL: http://arxiv.org/abs/2310.16919v1
- Date: Wed, 25 Oct 2023 18:38:10 GMT
- Title: Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs
- Authors: Jianwei Fei, Zhihua Xia, Benedetta Tondi, Mauro Barni
- Abstract summary: We propose a novel multi-bit box-free watermarking method for GANs with improved robustness against white-box attacks.
The watermark is embedded by adding an extra watermarking loss term during GAN training.
We show that the presence of the watermark has a negligible impact on the quality of the generated images.
- Score: 23.639074918667625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel multi-bit box-free watermarking method for the protection
of Intellectual Property Rights (IPR) of GANs with improved robustness against
white-box attacks like fine-tuning, pruning, quantization, and surrogate model
attacks. The watermark is embedded by adding an extra watermarking loss term
during GAN training, ensuring that the images generated by the GAN contain an
invisible watermark that can be retrieved by a pre-trained watermark decoder.
In order to improve the robustness against white-box model-level attacks, we
make sure that the model converges to a wide flat minimum of the watermarking
loss term, in such a way that any modification of the model parameters does not
erase the watermark. To do so, we add random noise vectors to the parameters of
the generator and require that the watermarking loss term is as invariant as
possible with respect to the presence of noise. This procedure forces the
generator to converge to a wide flat minimum of the watermarking loss. The
proposed method is architecture- and dataset-agnostic, thus being applicable to
many different generation tasks and models, as well as to CNN-based image
processing architectures. We present the results of extensive experiments
showing that the presence of the watermark has a negligible impact on the
quality of the generated images, and proving the superior robustness of the
watermark against model modification and surrogate model attacks.
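The following is a minimal PyTorch sketch of the training step the abstract describes: the generator is optimized with its usual adversarial loss plus a watermarking loss computed by a pre-trained (frozen) watermark decoder, and the watermarking loss is evaluated a second time at randomly perturbed generator parameters so that it stays small in a whole neighbourhood of the solution, i.e. at a wide flat minimum. All interfaces and hyperparameters here (the generator/discriminator/decoder signatures, lambda_wm, sigma) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; assumes generator(z) -> images, decoder(images) -> bit
# logits, and discriminator(images) -> realness scores. The decoder is the
# pre-trained watermark decoder and is kept frozen (requires_grad=False).
import torch
import torch.nn.functional as F

def watermark_loss(generator, decoder, z, target_bits):
    """BCE between the bits decoded from generated images and the owner's bits."""
    images = generator(z)
    logits = decoder(images)
    return F.binary_cross_entropy_with_logits(logits, target_bits)

def generator_step(generator, discriminator, decoder, optimizer, z, target_bits,
                   lambda_wm=1.0, sigma=1e-3):
    optimizer.zero_grad()

    # Usual non-saturating generator loss (any GAN loss could be used here).
    adv_loss = F.softplus(-discriminator(generator(z))).mean()

    # Watermarking loss at the current generator parameters.
    wm_loss = watermark_loss(generator, decoder, z, target_bits)
    (adv_loss + lambda_wm * wm_loss).backward()

    # Watermarking loss at randomly perturbed parameters: the noise is added
    # in place, so gradients accumulate on the same parameter tensors and
    # minimizing this term keeps the loss low around the solution, pushing
    # training toward a wide flat minimum of the watermarking loss.
    noise = []
    with torch.no_grad():
        for p in generator.parameters():
            n = sigma * torch.randn_like(p)
            p.add_(n)
            noise.append(n)
    (lambda_wm * watermark_loss(generator, decoder, z, target_bits)).backward()
    with torch.no_grad():
        for p, n in zip(generator.parameters(), noise):
            p.sub_(n)  # restore the unperturbed parameters

    optimizer.step()
    return adv_loss.item(), wm_loss.item()
```

Under this sketch, small parameter changes from fine-tuning, pruning, or quantization keep the model inside the flat region of the watermarking loss, so the embedded bits remain decodable from the generated images.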
Related papers
- An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation [21.058231817498115]
We propose an efficient watermarking method for latent diffusion models (LDMs) based on Low-Rank Adaptation (LoRA).
We show that the proposed method ensures fast watermark embedding and maintains a very low bit error rate of the watermark, high quality of the generated images, and a zero false negative rate (FNR) for verification.
arXiv Detail & Related papers (2024-10-26T15:23:49Z)
- Trigger-Based Fragile Model Watermarking for Image Transformation Networks [2.38776871944507]
In fragile watermarking, a sensitive watermark is embedded in an object in a manner such that the watermark breaks upon tampering.
We introduce a novel, trigger-based fragile model watermarking system for image transformation/generation networks.
Our approach, distinct from robust watermarking, effectively verifies the model's source and integrity across various datasets and attacks.
arXiv Detail & Related papers (2024-09-28T19:34:55Z)
- AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let Stable Diffusion (SD) models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z)
- Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [71.13610023354967]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models.
We propose a diffusion model watermarking technique that is both performance-lossless and training-free.
arXiv Detail & Related papers (2024-04-07T13:30:10Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, allowing the model owner to embed a watermark in the model.
We propose a mini-max formulation to find watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Piracy-Resistant DNN Watermarking by Block-Wise Image Transformation with Secret Key [15.483078145498085]
The proposed method embeds a watermark pattern in a model by using learnable transformed images.
It is piracy-resistant, so the original watermark cannot be overwritten by a pirated watermark.
The results show that it was resilient against fine-tuning and pruning attacks while maintaining a high watermark-detection accuracy.
arXiv Detail & Related papers (2021-04-09T08:21:53Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)