SIGMark: Scalable In-Generation Watermark with Blind Extraction for Video Diffusion
- URL: http://arxiv.org/abs/2603.02882v1
- Date: Tue, 03 Mar 2026 11:33:44 GMT
- Authors: Xinjie Zhu, Zijing Zhao, Hui Jin, Qingxiao Guo, Yilong Ma, Yunhao Wang, Xiaobing Guo, Weifeng Zhang
- Abstract summary: Invisible watermarking is a key technology for protecting AI-generated videos and tracing harmful content, and plays a crucial role in AI safety. Existing in-generation approaches are non-blind, requiring maintaining all the message-key pairs and performing template-based matching during extraction. We propose SIGMark, a Scalable In-Generation watermarking framework with blind extraction for video diffusion.
- Score: 11.934813439152528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence Generated Content (AIGC), particularly video generation with diffusion models, has advanced rapidly. Invisible watermarking is a key technology for protecting AI-generated videos and tracing harmful content, and thus plays a crucial role in AI safety. Beyond post-processing watermarks, which inevitably degrade video quality, recent studies have proposed distortion-free in-generation watermarking for video diffusion models. However, existing in-generation approaches are non-blind: they require maintaining all the message-key pairs and performing template-based matching during extraction, which incurs prohibitive computational costs at scale. Moreover, when applied to modern video diffusion models with causal 3D Variational Autoencoders (VAEs), their robustness against temporal disturbance becomes extremely weak. To overcome these challenges, we propose SIGMark, a Scalable In-Generation watermarking framework with blind extraction for video diffusion. To achieve blind extraction, we generate watermarked initial noise using a Global set of Frame-wise PseudoRandom Coding keys (GF-PRC), reducing the cost of storing large-scale information while preserving noise distribution and diversity for distortion-free watermarking. To enhance robustness, we further design a Segment Group-Ordering (SGO) module tailored to causal 3D VAEs, ensuring robust watermark inversion during extraction under temporal disturbance. Comprehensive experiments on modern diffusion models show that SIGMark achieves high bit accuracy during extraction under both temporal and spatial disturbances with minimal overhead, demonstrating its scalability and robustness. Our project is available at https://jeremyzhao1998.github.io/SIGMark-release/.
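As a rough illustration of the in-generation idea (sampling watermarked initial noise whose signs encode the payload under a globally shared pseudorandom key set, so extraction needs only the keys rather than a database of message-key pairs), here is a minimal NumPy sketch. The function names, the sign-coding scheme, and the majority-vote decoder are simplifying assumptions in the spirit of Gaussian-Shading-style watermarking, not SIGMark's actual GF-PRC construction:

```python
import numpy as np

def make_frame_keys(global_seed: int, num_frames: int, shape: tuple) -> np.ndarray:
    """Derive one pseudorandom +1/-1 sign key per frame position from a single
    global seed. Sharing keys across all messages (instead of storing one key
    per message) is what makes extraction blind."""
    rng = np.random.default_rng(global_seed)
    return rng.choice([-1.0, 1.0], size=(num_frames, *shape))

def embed(bits: np.ndarray, keys: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Sample watermarked initial noise. Each frame repeats the payload bits
    cyclically across its latent positions; the sign of each element is fixed
    to key * (+1 if bit else -1) while its magnitude stays half-normal, so to
    anyone without the key the signs look pseudorandom and the noise remains
    indistinguishable from standard Gaussian noise."""
    num_frames, h, w = keys.shape
    payload = np.resize(np.where(bits, 1.0, -1.0), (h, w))  # cyclic repeat
    mags = np.abs(rng.standard_normal((num_frames, h, w)))
    return mags * keys * payload  # sign(noise) = key * payload

def extract(noise: np.ndarray, keys: np.ndarray, num_bits: int) -> np.ndarray:
    """Blindly recover the bits: multiply signs by the keys to undo the key,
    then majority-vote over all positions and frames carrying each bit."""
    num_frames, h, w = keys.shape
    votes = np.sign(noise) * keys              # +1 votes for bit=1, -1 for bit=0
    flat = votes.reshape(num_frames, -1)
    idx = np.arange(h * w) % num_bits          # which bit each position carries
    scores = np.array([flat[:, idx == b].sum() for b in range(num_bits)])
    return scores > 0
```

In this sketch, robustness comes only from the redundancy of the majority vote; the paper's actual construction additionally relies on pseudorandom coding and the SGO module to survive temporal disturbance through the causal 3D VAE.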
Related papers
- SKeDA: A Generative Watermarking Framework for Text-to-video Diffusion Models [40.540302276054376]
We propose a generative watermarking framework tailored for text-to-video diffusion models. SKeDA consists of two components: (1) Shuffle-Key-based Distribution-preserving Sampling (SKe) employs a single base pseudo-random binary sequence for watermark encryption and derives frame-level encryption sequences through permutation. Extensive experiments demonstrate that SKeDA preserves high video generation quality and watermark robustness.
arXiv Detail & Related papers (2026-02-27T06:18:03Z)
- OSI: One-step Inversion Excels in Extracting Diffusion Watermarks [56.210696479553945]
We propose One-step Inversion (OSI), a significantly faster and more accurate method for extracting Gaussian Shading style watermarks. OSI reformulates watermark extraction as a learnable sign classification problem, which eliminates the need for precise regression of the initial noise. Our OSI substantially outperforms the multi-step diffusion inversion method: it is 20x faster, achieves higher extraction accuracy, and doubles the watermark payload capacity.
arXiv Detail & Related papers (2026-02-10T07:43:16Z)
- RDSplat: Robust Watermarking Against Diffusion Editing for 3D Gaussian Splatting [86.86361440345861]
3DGS watermarking methods remain highly vulnerable to diffusion-based editing. This paper introduces RDSplat, a Robust watermarking paradigm against diffusion-based editing. RDSplat embeds watermarks into 3DGS components that diffusion-based editing inherently preserves.
arXiv Detail & Related papers (2025-12-07T10:26:35Z)
- T2SMark: Balancing Robustness and Diversity in Noise-as-Watermark for Diffusion Models [89.29541056113442]
T2SMark is a two-stage watermarking scheme based on Tail-Truncated Sampling (TTS). We evaluate T2SMark on diffusion models with both U-Net and DiT backbones.
arXiv Detail & Related papers (2025-10-25T16:55:55Z)
- DiffMark: Diffusion-based Robust Watermark Against Deepfakes [49.05095089309156]
Deepfakes pose significant security and privacy threats through malicious facial manipulations. Existing watermarking methods often lack sufficient robustness against Deepfake manipulations. We propose DiffMark, a novel robust watermarking framework based on a diffusion model.
arXiv Detail & Related papers (2025-07-02T07:29:33Z)
- Video Signature: In-generation Watermarking for Latent Video Diffusion Models [42.064769031646904]
Video Signature (VID SIG) is an in-generation watermarking method for latent video diffusion models. We achieve this by partially fine-tuning the latent decoder, where Perturbation-Aware Suppression (PAS) pre-identifies and freezes perceptually sensitive layers. Experimental results show that VID SIG achieves the best overall performance in watermark extraction, visual quality, and generation efficiency.
arXiv Detail & Related papers (2025-05-31T17:43:54Z)
- VideoMark: A Distortion-Free Robust Watermarking Framework for Video Diffusion Models [18.427936201177122]
VideoMark is a distortion-free robust watermarking framework for video diffusion models. We employ a frame-wise watermarking strategy with pseudorandom error correction (PRC) codes, using a fixed watermark sequence. For watermark extraction, we propose a Temporal Matching Module (TMM) that leverages edit distance to align decoded messages with the original watermark sequence.
arXiv Detail & Related papers (2025-04-23T02:21:12Z)
- Gaussian Shading++: Rethinking the Realistic Deployment Challenge of Performance-Lossless Image Watermark for Diffusion Models [66.54457339638004]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models. We propose a diffusion model watermarking method tailored for real-world deployment. Gaussian Shading++ not only maintains performance losslessness but also outperforms existing methods in terms of robustness.
arXiv Detail & Related papers (2025-04-21T11:18:16Z)
- LVMark: Robust Watermark for Latent Video Diffusion Models [13.85241328100336]
We introduce LVMark, a novel watermarking method for video diffusion models. We propose a new watermark decoder tailored for generated videos by learning the consistency between adjacent frames. We optimize both the watermark decoder and the latent decoder of the diffusion model, effectively balancing the trade-off between visual quality and bit accuracy.
arXiv Detail & Related papers (2024-12-12T09:57:20Z)
- SleeperMark: Towards Robust Watermark against Fine-Tuning Text-to-image Diffusion Models [77.80595722480074]
SleeperMark is a framework designed to embed resilient watermarks into T2I diffusion models. It guides the model to disentangle the watermark information from the semantic concepts it learns. Our experiments demonstrate the effectiveness of SleeperMark across various types of diffusion models.
arXiv Detail & Related papers (2024-12-06T08:44:18Z)
- Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs [23.639074918667625]
We propose a novel multi-bit box-free watermarking method for GANs with improved robustness against white-box attacks.
The watermark is embedded by adding an extra watermarking loss term during GAN training.
We show that the presence of the watermark has a negligible impact on the quality of the generated images.
arXiv Detail & Related papers (2023-10-25T18:38:10Z)
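Several extraction strategies above align decoded per-frame messages against a known watermark sequence, e.g., VideoMark's edit-distance-based Temporal Matching Module. A minimal sketch of that alignment step follows; the cyclic-shift search and all names here are illustrative assumptions, not any paper's exact algorithm:

```python
from typing import List

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via a one-row dynamic program."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def align_offset(decoded: List[str], template: List[str]) -> int:
    """Find the cyclic shift of the template sequence that best explains the
    decoded per-frame messages, scoring each candidate shift by total edit
    distance; this tolerates frame drops/edits that corrupt individual bits."""
    best_shift, best_cost = 0, float("inf")
    n = len(template)
    for shift in range(n):
        cost = sum(edit_distance(msg, template[(shift + i) % n])
                   for i, msg in enumerate(decoded))
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift
```

The edit-distance objective (rather than exact string matching) is what lets alignment survive per-frame bit errors and temporal disturbance; the cost is a linear scan over candidate shifts, which stays cheap for the short frame-wise sequences used in these schemes.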
This list is automatically generated from the titles and abstracts of the papers on this site.