Mitigating Watermark Forgery in Generative Models via Randomized Key Selection
- URL: http://arxiv.org/abs/2507.07871v3
- Date: Sat, 27 Sep 2025 07:12:29 GMT
- Title: Mitigating Watermark Forgery in Generative Models via Randomized Key Selection
- Authors: Toluwani Aremu, Noor Hussein, Munachiso Nwadike, Samuele Poppi, Jie Zhang, Karthik Nandakumar, Neil Gong, Nils Lukas
- Abstract summary: A core security threat is forgery attacks, where adversaries insert the provider's watermark into content. Existing defenses resist forgery by embedding many watermarks with multiple keys into the same content. We propose a defense that is provably forgery-resistant independent of the amount of watermarked content collected by the attacker.
- Score: 33.12939822735328
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Watermarking enables GenAI providers to verify whether content was generated by their models. A watermark is a hidden signal in the content, whose presence can be detected using a secret watermark key. A core security threat is forgery attacks, where adversaries insert the provider's watermark into content not produced by the provider, potentially damaging their reputation and undermining trust. Existing defenses resist forgery by embedding many watermarks with multiple keys into the same content, which can degrade model utility. However, forgery remains a threat when attackers can collect sufficiently many watermarked samples. We propose a defense that is provably forgery-resistant independent of the number of watermarked samples collected by the attacker, provided they cannot easily distinguish watermarks from different keys. Our scheme does not further degrade model utility. We randomize the watermark key selection for each query and accept content as genuine only if a watermark is detected by exactly one key. We focus on the image and text modalities, but our defense is modality-agnostic, since it treats the underlying watermarking method as a black box. Our method provably bounds the attacker's success rate, and we empirically observe a reduction from near-perfect success rates to only 2% at negligible computational overhead.
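The accept rule is simple to state in code. Below is a minimal sketch in Python of the randomized-key defense described in the abstract, assuming a hypothetical black-box watermarker exposing `embed(prompt, key)` and `detect(content, key)` (these names are illustrative placeholders; the paper treats the underlying watermarking method as a black box):

```python
import secrets

class RandomizedKeyDefense:
    """Sketch: embed with one randomly chosen key per query; accept
    content as genuine only if exactly one key detects a watermark."""

    def __init__(self, watermarker, keys):
        self.watermarker = watermarker  # hypothetical black-box interface
        self.keys = keys                # provider's pool of secret keys

    def generate(self, prompt):
        # Randomize the watermark key selection for each query.
        key = secrets.choice(self.keys)
        return self.watermarker.embed(prompt, key)

    def verify(self, content):
        # Count how many keys detect a watermark in the content.
        hits = sum(self.watermarker.detect(content, k) for k in self.keys)
        # Genuine iff *exactly* one key fires; content that triggers
        # zero keys, or several at once, is rejected.
        return hits == 1
```

Intuitively, an attacker who cannot distinguish watermarks from different keys will tend to imprint signals from several keys at once, which the exactly-one rule rejects; this is the condition under which the abstract's forgery-resistance guarantee holds.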
Related papers
- Transferable Black-Box One-Shot Forging of Watermarks via Image Preference Models [42.902365202924535]
We investigate watermark forging in the context of widely used post-hoc image watermarking. We introduce a preference model to assess whether an image is watermarked. We demonstrate the model's capability to remove and forge watermarks by optimizing the input image through backpropagation.
arXiv Detail & Related papers (2025-10-23T12:06:35Z) - DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation [7.366060554733598]
We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones.
arXiv Detail & Related papers (2025-10-13T03:53:40Z) - WMCopier: Forging Invisible Image Watermarks on Arbitrary Images [21.17890218813236]
We propose WMCopier, an effective watermark forgery attack that operates without requiring prior knowledge of or access to the target watermarking algorithm. Our approach first models the target watermark distribution using an unconditional diffusion model, and then seamlessly embeds the target watermark into a non-watermarked image. Experimental results demonstrate that WMCopier effectively deceives both open-source and closed-source watermark systems.
arXiv Detail & Related papers (2025-03-28T11:11:19Z) - SEAL: Semantic Aware Image Watermarking [26.606008778795193]
We propose a novel watermarking method that embeds semantic information about the generated image directly into the watermark. The key pattern can be inferred from the semantic embedding of the image using locality-sensitive hashing. Our results suggest that content-aware watermarks can mitigate risks arising from image-generative models.
arXiv Detail & Related papers (2025-03-15T15:29:05Z) - Let Watermarks Speak: A Robust and Unforgeable Watermark for Language Models [0.0]
We propose an undetectable, robust, single-bit watermarking scheme. Its robustness is comparable to that of the most advanced zero-bit watermarking schemes.
arXiv Detail & Related papers (2024-12-27T11:58:05Z) - RoboSignature: Robust Signature and Watermarking on Network Attacks [0.5461938536945723]
We present a novel adversarial fine-tuning attack that disrupts the model's ability to embed the intended watermark. Our findings emphasize the importance of anticipating and defending against potential vulnerabilities in generative systems.
arXiv Detail & Related papers (2024-12-22T04:36:27Z) - Robust and Minimally Invasive Watermarking for EaaS [50.08021440235581]
Embedding as a Service (EaaS) is emerging as a crucial component of AI applications. EaaS is vulnerable to model extraction attacks, highlighting the need for copyright protection. We propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS.
arXiv Detail & Related papers (2024-10-23T04:34:49Z) - An Undetectable Watermark for Generative Image Models [65.31658824274894]
We present the first undetectable watermarking scheme for generative image models. In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric. Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code.
arXiv Detail & Related papers (2024-10-09T18:33:06Z) - Watermark Smoothing Attacks against Language Models [40.02225709485305]
We introduce the Smoothing Attack, a novel watermark removal method. We validate our attack on open-source models ranging from 1.3B to 30B parameters.
arXiv Detail & Related papers (2024-07-19T11:04:54Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - Steganalysis on Digital Watermarking: Is Your Defense Truly Impervious? [21.06493827123594]
Steganalysis attacks can extract and remove the watermark with minimal perceptual distortion.
We show how averaging a collection of watermarked images can reveal the underlying watermark pattern (see the averaging sketch after this list).
We propose security guidelines calling for using content-adaptive watermarking strategies and performing security evaluation against steganalysis.
arXiv Detail & Related papers (2024-06-13T12:01:28Z) - Large Language Model Watermark Stealing With Mixed Integer Programming [51.336009662771396]
Large Language Model (LLM) watermarking shows promise for addressing copyright concerns, monitoring AI-generated text, and preventing its misuse.
Recent research indicates that watermarking methods using numerous keys are susceptible to removal attacks.
We propose a novel green list stealing attack against the state-of-the-art LLM watermark scheme.
arXiv Detail & Related papers (2024-05-30T04:11:17Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, in which the model owner watermarks the model.
We propose a minimax formulation to find watermark-removed models and recover their watermark behavior.
Our method improves the robustness of model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - On the Reliability of Watermarks for Large Language Models [95.87476978352659]
We study the robustness of watermarked text after it is re-written by humans, paraphrased by a non-watermarked LLM, or mixed into a longer hand-written document.
We find that watermarks remain detectable even after human and machine paraphrasing.
We also consider a range of new detection schemes that are sensitive to short spans of watermarked text embedded inside a large document.
arXiv Detail & Related papers (2023-06-07T17:58:48Z) - Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image (see the regeneration sketch after this list).
arXiv Detail & Related papers (2023-06-02T23:29:28Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z) - Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
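As a concrete illustration of the averaging observation in the steganalysis paper above: if a watermark behaves like a fixed additive pattern shared across images, averaging many watermarked images washes out the independent image content while the common pattern survives. A minimal NumPy sketch on synthetic data (the additive model and all values here are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: one fixed additive watermark shared by all images.
H, W, N = 64, 64, 2000
watermark = 0.05 * rng.standard_normal((H, W))    # hidden pattern
covers = rng.uniform(0.0, 1.0, size=(N, H, W))    # independent image content
watermarked = covers + watermark

# Averaging attack: per-image content averages toward a flat value,
# so the residual of the mean image exposes the shared pattern.
mean_img = watermarked.mean(axis=0)
estimate = mean_img - mean_img.mean()             # strip the flat DC offset

corr = np.corrcoef(estimate.ravel(),
                   (watermark - watermark.mean()).ravel())[0, 1]
print(f"correlation with true watermark: {corr:.3f}")  # approaches 1 as N grows
```

This is exactly the failure mode that motivates the paper's call for content-adaptive watermarking: a pattern that varies with the image content does not accumulate under averaging.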
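The regeneration attack above is likewise a two-step recipe worth sketching: add enough noise to drown the hidden signal, then hand the noisy image to a generative reconstructor. A minimal sketch, where `denoiser` stands in for any pretrained image-restoration or diffusion model (a placeholder, not a specific library API):

```python
import numpy as np

def regeneration_attack(image, denoiser, noise_std=0.1, seed=None):
    """Sketch of a regeneration attack on an invisible watermark."""
    rng = np.random.default_rng(seed)
    # Step 1: add random noise strong enough to destroy the watermark.
    noisy = image + noise_std * rng.standard_normal(image.shape)
    # Step 2: reconstruct a perceptually similar image; the watermark
    # signal, being small and noise-like, is not reproduced.
    return denoiser(np.clip(noisy, 0.0, 1.0))
```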
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.