Invisible Watermarking for Audio Generation Diffusion Models
- URL: http://arxiv.org/abs/2309.13166v2
- Date: Tue, 31 Oct 2023 20:46:09 GMT
- Title: Invisible Watermarking for Audio Generation Diffusion Models
- Authors: Xirong Cao, Xiang Li, Divyesh Jadav, Yanzhao Wu, Zhehui Chen, Chen
Zeng, Wenqi Wei
- Abstract summary: This paper presents the first watermarking technique applied to audio diffusion models trained on mel-spectrograms.
Our model not only excels in benign audio generation but also incorporates an invisible watermarking trigger mechanism for model verification.
- Score: 11.901028740065662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have gained prominence for their capabilities in
data generation and transformation, achieving state-of-the-art performance on
various tasks in both the image and audio domains. In the rapidly
evolving field of audio-based machine learning, safeguarding model integrity
and establishing data copyright are of paramount importance. This paper
presents the first watermarking technique applied to audio diffusion models
trained on mel-spectrograms, offering a novel approach to these challenges. Our
model not only excels in benign audio generation but also incorporates an
invisible watermarking trigger mechanism for model verification. This watermark
trigger serves as a protective layer, enabling the
identification of model ownership and ensuring its integrity. Through extensive
experiments, we demonstrate that invisible watermark triggers can effectively
protect against unauthorized modifications while maintaining high utility in
benign audio generation tasks.
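The abstract gives no implementation details, but the verification flow it describes (feed a secret trigger to the model and check whether the generated mel-spectrogram matches a pre-registered watermark pattern) can be sketched roughly as below. This is a minimal illustration under assumed interfaces, not the paper's code: the sample_fn callable, the tensor shapes, and the MSE threshold are all hypothetical.
```python
import torch


@torch.no_grad()
def verify_watermark(sample_fn, trigger_noise, reference_mel, threshold=0.05):
    """Trigger-based ownership check for an audio diffusion model.

    sample_fn:     callable mapping an initial-noise tensor to a generated
                   mel-spectrogram (the suspect model's sampling loop).
    trigger_noise: the owner's secret trigger input.
    reference_mel: the watermark mel-spectrogram the model was trained to
                   emit when given the trigger.
    threshold:     hypothetical MSE cutoff; in practice it would be tuned
                   on held-out benign and watermarked outputs.
    """
    generated = sample_fn(trigger_noise)
    mse = torch.mean((generated - reference_mel) ** 2).item()
    return mse < threshold  # small distance => the watermark is present


# Toy usage with a stand-in "model" (an identity sampler).
trigger = torch.randn(1, 80, 256)   # 80 mel bins x 256 frames, assumed shape
watermark = trigger.clone()         # pretend the model reproduces the trigger
print(verify_watermark(lambda z: z, trigger, watermark))  # True
```
Because the trigger stays secret, a benign user never observes the watermark output, which is how the scheme can preserve generation quality while still supporting ownership verification.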
Related papers
- Trigger-Based Fragile Model Watermarking for Image Transformation Networks [2.38776871944507]
In fragile watermarking, a sensitive watermark is embedded in an object so that the watermark breaks upon tampering.
We introduce a novel, trigger-based fragile model watermarking system for image transformation/generation networks.
Our approach, distinct from robust watermarking, effectively verifies the model's source and integrity across various datasets and attacks.
arXiv Detail & Related papers (2024-09-28T19:34:55Z)
- Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299]
We introduce a novel framework, Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending (TEAWIB).
TEAWIB incorporates a unique ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models.
Experiments validate the effectiveness of TEAWIB, showcasing state-of-the-art performance in perceptual quality and attribution accuracy.
arXiv Detail & Related papers (2024-09-17T07:52:09Z)
- GROOT: Generating Robust Watermark for Diffusion-Model-Based Audio Synthesis [37.065509936285466]
This paper proposes Groot, a generative robust audio watermarking method.
In this paradigm, the processes of watermark generation and audio synthesis occur simultaneously.
Groot exhibits exceptional robustness when facing compound attacks, maintaining an average watermark extraction accuracy of around 95%.
arXiv Detail & Related papers (2024-07-15T06:57:19Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let Stable Diffusion (SD) models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z)
- Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [71.13610023354967]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models.
We propose a diffusion model watermarking technique that is both performance-lossless and training-free.
arXiv Detail & Related papers (2024-04-07T13:30:10Z)
- A Watermark-Conditioned Diffusion Model for IP Protection [31.969286898467985]
We propose a unified watermarking framework for content copyright protection within the context of diffusion models.
To this end, we design a Watermark-conditioned Diffusion model called WaDiff.
Our method is effective and robust in both the detection and owner identification tasks.
arXiv Detail & Related papers (2024-03-16T11:08:15Z)
- Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs [23.639074918667625]
We propose a novel multi-bit box-free watermarking method for GANs with improved robustness against white-box attacks.
The watermark is embedded by adding an extra watermarking loss term during GAN training; a minimal sketch of this pattern appears after this list.
We show that the presence of the watermark has a negligible impact on the quality of the generated images.
arXiv Detail & Related papers (2023-10-25T18:38:10Z)
- Unified High-binding Watermark for Unconditional Image Generation Models [7.4037644261198885]
An attacker can steal the output images of the target model and use them as part of the training data to train a private surrogate unconditional image generation (UIG) model.
We propose a two-stage unified watermark verification mechanism with high-binding effects.
Experiments demonstrate our method can complete the verification work with almost zero false positive rate.
arXiv Detail & Related papers (2023-10-14T03:26:21Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
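A recurring embedding recipe among the entries above (most explicitly in the GAN watermarking paper) is to add an auxiliary watermark-decoding loss to the generator's training objective, so the owner's bit string can be read back out of any generated output. Below is a minimal sketch of that pattern, not any paper's actual method; the generator, discriminator, and watermark_decoder modules, the bit length, and the weighting lam are all assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def generator_loss(generator, discriminator, watermark_decoder,
                   z, target_bits, lam=0.1):
    """One generator objective = adversarial term + watermark term.

    The (frozen) watermark_decoder should be able to read the owner's
    bit string back out of every generated image, embedding the
    watermark in the model's outputs without a visible pattern.
    """
    fake = generator(z)
    # Non-saturating adversarial loss on the generated batch.
    adv = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(fake.size(0), 1))
    # Watermark-decoding loss: decoder logits vs. the owner's bits.
    wm = F.binary_cross_entropy_with_logits(
        watermark_decoder(fake), target_bits)
    return adv + lam * wm


# Toy usage with linear stand-ins for the real networks.
g = nn.Linear(16, 32)                # "generator"
d = nn.Linear(32, 1)                 # "discriminator" (outputs a logit)
w = nn.Linear(32, 48)                # "watermark decoder" (48-bit logits)
z = torch.randn(4, 16)
bits = torch.randint(0, 2, (4, 48)).float()
loss = generator_loss(g, d, w, z, bits)
loss.backward()                      # gradients flow into the generator
print(loss.item())
```
The weighting lam trades watermark recoverability against generation quality, which is why several of the papers above report negligible impact on output fidelity at small weights.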
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.