RAVEN: Erasing Invisible Watermarks via Novel View Synthesis
- URL: http://arxiv.org/abs/2601.08832v1
- Date: Tue, 13 Jan 2026 18:59:58 GMT
- Authors: Fahad Shamshad, Nils Lukas, Karthik Nandakumar
- Abstract summary: In this work, we expose a fundamental vulnerability in invisible watermarks by reformulating watermark removal as a view synthesis problem. Our key insight is that generating a perceptually consistent alternative view of the same semantic content naturally removes the embedded watermark while preserving visual fidelity. We introduce a zero-shot diffusion-based framework that applies controlled geometric transformations in latent space, augmented with view-guided correspondence attention to maintain structural consistency during reconstruction.
- Score: 35.417500510522835
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Invisible watermarking has become a critical mechanism for authenticating AI-generated image content, with major platforms deploying watermarking schemes at scale. However, evaluating the vulnerability of these schemes against sophisticated removal attacks remains essential to assess their reliability and guide robust design. In this work, we expose a fundamental vulnerability in invisible watermarks by reformulating watermark removal as a view synthesis problem. Our key insight is that generating a perceptually consistent alternative view of the same semantic content, akin to re-observing a scene from a shifted perspective, naturally removes the embedded watermark while preserving visual fidelity. This reveals a critical gap: watermarks robust to pixel-space and frequency-domain attacks remain vulnerable to semantic-preserving viewpoint transformations. We introduce a zero-shot diffusion-based framework that applies controlled geometric transformations in latent space, augmented with view-guided correspondence attention to maintain structural consistency during reconstruction. Operating on frozen pre-trained models without detector access or watermark knowledge, our method achieves state-of-the-art watermark suppression across 15 watermarking methods, outperforming 14 baseline attacks while maintaining superior perceptual quality across multiple datasets.
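Several of the attacks surveyed here, RAVEN included, follow a perturb-then-reconstruct recipe: move the image away from its watermarked point (via noise or a latent-space transformation), then restore perceptual content with a generative prior. The following NumPy sketch shows only this control flow; the box-filter `denoise`, the `regeneration_attack` helper, and the toy watermark pattern are illustrative assumptions standing in for a diffusion model's reverse process, not the paper's actual method.

```python
import numpy as np

def denoise(img, k=5):
    """Box-filter stand-in for a learned denoiser (illustrative only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def regeneration_attack(image, noise_std=0.1, rng=None):
    """Perturb-then-reconstruct: add noise, then rebuild the image.

    A real attack would replace `denoise` with a diffusion model's
    reverse (denoising) process; this sketch only shows the pipeline.
    """
    rng = np.random.default_rng(rng)
    noisy = np.clip(image + rng.normal(0.0, noise_std, image.shape), 0.0, 1.0)
    return denoise(noisy)

if __name__ == "__main__":
    # Toy "watermarked" image: a smooth gradient plus a faint alternating pattern.
    h, w = 64, 64
    base = np.linspace(0.2, 0.8, w)[None, :].repeat(h, axis=0)
    watermarked = np.clip(base + 0.02 * ((-1.0) ** np.arange(w))[None, :], 0.0, 1.0)
    attacked = regeneration_attack(watermarked, noise_std=0.05, rng=0)
    # Low-frequency content survives; the high-frequency "watermark" is averaged out.
    print(f"mean |attacked - base| = {np.abs(attacked - base).mean():.4f}")
```

The key property the sketch illustrates is that reconstruction preserves low-frequency semantic content while discarding the high-frequency residual where many invisible watermarks live.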
Related papers
- Vanishing Watermarks: Diffusion-Based Image Editing Undermines Robust Invisible Watermarking [3.583615559438432]
Powerful diffusion-based image generation and editing techniques now pose a new threat to robust invisible watermarking schemes. We show that diffusion models can effectively erase robust watermarks even when those watermarks were designed to withstand conventional distortions. We introduce a guided diffusion-based attack that explicitly targets the embedded watermark signal during generation, significantly degrading watermark detectability.
arXiv Detail & Related papers (2026-02-24T08:34:48Z) - RecoverMark: Robust Watermarking for Localization and Recovery of Manipulated Faces [16.612226216769262]
We propose RecoverMark, a watermarking framework that achieves robust manipulation localization, content recovery, and ownership verification simultaneously. Our key insight is twofold. First, we exploit a critical real-world constraint: an adversary must preserve the background's semantic consistency to avoid visual detection. Based on these insights, RecoverMark treats the protected face content itself as the watermark and embeds it into the surrounding background.
arXiv Detail & Related papers (2026-02-24T07:11:40Z) - On the Information-Theoretic Fragility of Robust Watermarking under Diffusion Editing [3.6210754412846327]
Powerful diffusion-based image generation and editing techniques pose a new threat to robust watermarking schemes. We propose a guided diffusion attack algorithm that explicitly targets and erases watermark signals during generation. We evaluate our approach on recent deep learning-based watermarking schemes and demonstrate near-zero watermark recovery rates after attack.
arXiv Detail & Related papers (2025-11-14T03:41:24Z) - Diffusion-Based Image Editing: An Unforeseen Adversary to Robust Invisible Watermarks [4.138397555991069]
Powerful diffusion-based image generation and editing models can inadvertently remove or distort embedded watermarks. We present a theoretical and empirical analysis demonstrating that diffusion-based image editing can effectively break state-of-the-art robust watermarks. We propose a diffusion-driven attack that uses generative image regeneration to erase watermarks from a given image.
arXiv Detail & Related papers (2025-11-05T16:20:29Z) - Diffusion-Based Image Editing for Breaking Robust Watermarks [4.273350357872755]
Powerful diffusion-based image generation and editing techniques pose a new threat to robust watermarking schemes. We show that a diffusion-driven "image regeneration" process can erase embedded watermarks while preserving image content. We introduce a novel guided diffusion attack that explicitly targets the watermark signal during generation, significantly degrading watermark detectability.
arXiv Detail & Related papers (2025-10-07T14:34:42Z) - Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal [57.84348166457113]
We introduce a novel feature adapting framework that leverages the representation capacity of a pre-trained image inpainting model. Our approach bridges the knowledge gap between image inpainting and watermark removal by fusing information of the residual background content beneath watermarks into the inpainting backbone model. To relieve the dependence on high-quality watermark masks, we introduce a new training paradigm that utilizes coarse watermark masks to guide the inference process.
arXiv Detail & Related papers (2025-04-07T02:37:14Z) - Robust Watermarks Leak: Channel-Aware Feature Extraction Enables Adversarial Watermark Manipulation [21.41643665626451]
We propose an attack framework that extracts leakage of watermark patterns using a pre-trained vision model. Unlike prior works requiring massive data or detector access, our method achieves both forgery and detection evasion with a single watermarked image. Our work exposes the robustness-stealthiness paradox: current "robust" watermarks sacrifice security for distortion resistance, providing insights for future watermark design.
arXiv Detail & Related papers (2025-02-10T12:55:08Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z) - T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z) - WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit Joint Learning [68.00975867932331]
Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-08-20T07:56:34Z) - Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image.
arXiv Detail & Related papers (2023-06-02T23:29:28Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.