OSI: One-step Inversion Excels in Extracting Diffusion Watermarks
- URL: http://arxiv.org/abs/2602.09494v1
- Date: Tue, 10 Feb 2026 07:43:16 GMT
- Title: OSI: One-step Inversion Excels in Extracting Diffusion Watermarks
- Authors: Yuwei Chen, Zhenliang He, Jia Tang, Meina Kan, Shiguang Shan
- Abstract summary: We propose One-step Inversion (OSI), a significantly faster and more accurate method for extracting Gaussian Shading style watermarks. OSI reformulates watermark extraction as a learnable sign classification problem, which eliminates the need for precise regression of the initial noise. Our OSI substantially outperforms the multi-step diffusion inversion method: it is 20x faster, achieves higher extraction accuracy, and doubles the watermark payload capacity.
- Score: 56.210696479553945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Watermarking is an important mechanism for provenance and copyright protection of diffusion-generated images. Training-free methods, exemplified by Gaussian Shading, embed watermarks into the initial noise of diffusion models with negligible impact on the quality of generated images. However, extracting this type of watermark typically requires multi-step diffusion inversion to obtain precise initial noise, which is computationally expensive and time-consuming. To address this issue, we propose One-step Inversion (OSI), a significantly faster and more accurate method for extracting Gaussian Shading style watermarks. OSI reformulates watermark extraction as a learnable sign classification problem, which eliminates the need for precise regression of the initial noise. Then, we initialize the OSI model from the diffusion backbone and finetune it on synthesized noise-image pairs with a sign classification objective. In this manner, the OSI model is able to accomplish the watermark extraction efficiently in only one step. Our OSI substantially outperforms the multi-step diffusion inversion method: it is 20x faster, achieves higher extraction accuracy, and doubles the watermark payload capacity. Extensive experiments across diverse schedulers, diffusion backbones, and cryptographic schemes consistently show improvements, demonstrating the generality of our OSI framework.
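The core idea in the abstract, that Gaussian Shading-style watermarks live in the signs of the initial noise, so extraction reduces to per-element sign classification rather than exact noise regression, can be illustrated with a toy NumPy sketch. This is my own simplification, not the paper's implementation: the real OSI model is a finetuned one-step network that predicts signs from the generated image, whereas here an imperfect inversion is simulated by adding noise to the true latent.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_bits_in_noise(bits, shape):
    """Sample Gaussian noise whose element signs encode the message bits.

    Magnitudes are drawn from |N(0, 1)|, so the marginal distribution of
    each element is still standard Gaussian once the signs are applied.
    """
    magnitudes = np.abs(rng.standard_normal(shape))
    signs = np.where(bits.reshape(shape) == 1, 1.0, -1.0)
    return magnitudes * signs

def extract_bits_by_sign(noise_estimate):
    """One-step extraction: classify the sign of each recovered element."""
    return (noise_estimate > 0).astype(int).ravel()

# Embed a 256-bit message into a 16x16 initial-noise latent.
bits = rng.integers(0, 2, size=256)
z0 = embed_bits_in_noise(bits, (16, 16))

# Simulate an imperfect inversion: the recovered latent is the true one
# plus moderate Gaussian error, yet most signs survive.
z_hat = z0 + 0.3 * rng.standard_normal(z0.shape)
recovered = extract_bits_by_sign(z_hat)
accuracy = (recovered == bits).mean()
```

Because sign classification only needs each recovered element to land on the correct side of zero, it tolerates substantial inversion error, which is why a cheap one-step estimate can suffice where exact regression of the initial noise would not.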
Related papers
- T2SMark: Balancing Robustness and Diversity in Noise-as-Watermark for Diffusion Models [89.29541056113442]
T2SMark is a two-stage watermarking scheme based on Tail-Truncated Sampling (TTS). We evaluate T2SMark on diffusion models with both U-Net and DiT backbones.
arXiv Detail & Related papers (2025-10-25T16:55:55Z)
- Diffusion-Based Image Editing for Breaking Robust Watermarks [4.273350357872755]
Powerful diffusion-based image generation and editing techniques pose a new threat to robust watermarking schemes. We show that a diffusion-driven "image regeneration" process can erase embedded watermarks while preserving image content. We introduce a novel guided diffusion attack that explicitly targets the watermark signal during generation, significantly degrading watermark detectability.
arXiv Detail & Related papers (2025-10-07T14:34:42Z)
- DiffMark: Diffusion-based Robust Watermark Against Deepfakes [49.05095089309156]
Deepfakes pose significant security and privacy threats through malicious facial manipulations. Existing watermarking methods often lack sufficient robustness against Deepfake manipulations. We propose a novel robust watermarking framework based on diffusion models, called DiffMark.
arXiv Detail & Related papers (2025-07-02T07:29:33Z)
- TAG-WM: Tamper-Aware Generative Image Watermarking via Diffusion Inversion Sensitivity [76.98973481600002]
This paper proposes a Tamper-Aware Generative image WaterMarking method named TAG-WM. The proposed method comprises four key modules, including a dual-mark joint sampling (DMJS) algorithm for embedding copyright and localization watermarks into the latent space while preserving generative quality. The experimental results demonstrate that TAG-WM achieves state-of-the-art performance in both tampering robustness and localization capability, even under distortion.
arXiv Detail & Related papers (2025-06-30T03:14:07Z)
- GaussMarker: Robust Dual-Domain Watermark for Diffusion Models [9.403937469402871]
This paper presents the first dual-domain DM watermarking approach, using a pipelined injector to consistently embed watermarks in both the spatial and frequency domains. GaussMarker efficiently achieves state-of-the-art performance under eight image distortions and four advanced attacks across three versions of Stable Diffusion.
arXiv Detail & Related papers (2025-06-13T03:45:15Z)
- Optimization-Free Universal Watermark Forgery with Regenerative Diffusion Models [50.73220224678009]
Watermarking can be used to verify the origin of synthetic images generated by artificial intelligence models. Recent studies demonstrate the capability to forge watermarks from a target image onto cover images via adversarial techniques. In this paper, we uncover a greater risk: an optimization-free and universal watermark forgery. Our approach significantly broadens the scope of attacks, presenting a greater challenge to the security of current watermarking techniques.
arXiv Detail & Related papers (2025-06-06T12:08:02Z)
- Watermarking in Diffusion Model: Gaussian Shading with Exact Diffusion Inversion via Coupled Transformations (EDICT) [0.0]
This paper introduces a novel approach to enhance the performance of Gaussian Shading. We propose to leverage EDICT's ability to derive exact inverse mappings to refine this process. Our method involves duplicating the watermark-infused noisy latent and employing a reciprocal, alternating denoising and noising scheme.
arXiv Detail & Related papers (2025-01-15T06:04:18Z)
- SuperMark: Robust and Training-free Image Watermarking via Diffusion-based Super-Resolution [27.345134138673945]
We propose SuperMark, a robust, training-free watermarking framework. SuperMark embeds the watermark into initial Gaussian noise using existing techniques, then applies pre-trained Super-Resolution models to denoise the watermarked noise, producing the final watermarked image. For extraction, the process is reversed: the watermarked image is inverted back to the initial watermarked noise via DDIM Inversion, from which the embedded watermark is extracted. Experiments demonstrate that SuperMark achieves fidelity comparable to existing methods while significantly improving robustness.
arXiv Detail & Related papers (2024-12-13T11:20:59Z)
- Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach [53.32564762183639]
We introduce a novel, unified theoretical framework for watermarking Large Language Models (LLMs). Our approach aims to maximize detection performance while maintaining control over the worst-case false positive rate (FPR) and the distortion of text quality. We propose a distortion-free, distribution-adaptive watermarking algorithm (DAWA) that leverages a surrogate model for model-agnosticism and efficiency.
arXiv Detail & Related papers (2024-10-03T18:28:10Z)
- JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits [76.25962336540226]
JIGMARK is a first-of-its-kind watermarking technique that enhances robustness through contrastive learning.
Our evaluation reveals that JIGMARK significantly surpasses existing watermarking solutions in resilience to diffusion-model edits.
arXiv Detail & Related papers (2024-06-06T03:31:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.