PRO: Enabling Precise and Robust Text Watermark for Open-Source LLMs
- URL: http://arxiv.org/abs/2510.23891v1
- Date: Mon, 27 Oct 2025 22:00:49 GMT
- Title: PRO: Enabling Precise and Robust Text Watermark for Open-Source LLMs
- Authors: Jiaqi Xue, Yifei Zhao, Mansour Al Ghanim, Shangqian Gao, Ruimin Sun, Qian Lou, Mengxin Zheng
- Abstract summary: We propose PRO, a Precise and Robust text watermarking method for open-source models. PRO substantially improves both watermark detectability and resilience to model modifications.
- Score: 33.70483974998233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text watermarking for large language models (LLMs) enables model owners to verify text origin and protect intellectual property. While watermarking methods for closed-source LLMs are relatively mature, extending them to open-source models remains challenging, as developers cannot control the decoding process. Consequently, owners of open-source LLMs lack practical means to verify whether text was generated by their models. A core difficulty lies in embedding watermarks directly into model weights without hurting detectability. A promising idea is to distill watermarks from a closed-source model into an open one, but this suffers from (i) poor detectability due to mismatch between learned and predefined patterns, and (ii) fragility to downstream modifications such as fine-tuning or model merging. To overcome these limitations, we propose PRO, a Precise and Robust text watermarking method for open-source LLMs. PRO jointly trains a watermark policy model with the LLM, producing patterns that are easier for the model to learn and more consistent with detection criteria. A regularization term further simulates downstream perturbations and penalizes degradation in watermark detectability, ensuring robustness under model edits. Experiments on open-source LLMs (e.g., LLaMA-3.2, LLaMA-3, Phi-2) show that PRO substantially improves both watermark detectability and resilience to model modifications.
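The abstract names two ingredients: a watermark policy model trained jointly with the LLM, and a regularizer that simulates downstream perturbations and penalizes lost detectability. The PyTorch-style sketch below illustrates how such an objective could be composed; the policy interface, the logit-space noise used as a stand-in for model edits, and the loss weights are illustrative assumptions, not the authors' implementation.
```python
# Hedged sketch of a PRO-style training objective; `policy`, lambda_wm,
# lambda_rob, and noise_std are illustrative assumptions.
import torch
import torch.nn.functional as F

def pro_style_loss(llm, policy, input_ids,
                   lambda_wm=1.0, lambda_rob=0.5, noise_std=0.05):
    logits = llm(input_ids).logits                        # (B, T, V)
    vocab = logits.size(-1)

    # (1) Standard next-token loss preserves generation quality.
    lm_loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab),
                              input_ids[:, 1:].reshape(-1))

    # (2) Watermark agreement: the trainable policy emits per-position logit
    # biases; the LLM is pulled toward the biased (watermarked) distribution,
    # so the learned pattern matches what the detector will test for.
    wm_bias = policy(input_ids)                           # (B, T, V), assumed
    wm_target = F.softmax(logits.detach() + wm_bias, dim=-1)
    wm_loss = F.kl_div(F.log_softmax(logits, dim=-1), wm_target,
                       reduction="batchmean")

    # (3) Robustness: crudely simulate a downstream edit as noise on the
    # logits and penalize any drop in agreement with the watermark pattern,
    # so detectability survives fine-tuning/merging-style changes.
    noisy = logits + noise_std * torch.randn_like(logits)
    rob_loss = F.kl_div(F.log_softmax(noisy, dim=-1), wm_target,
                        reduction="batchmean")

    return lm_loss + lambda_wm * wm_loss + lambda_rob * rob_loss
```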
Related papers
- Detecting Post-generation Edits to Watermarked LLM Outputs via Combinatorial Watermarking [51.417096446156926]
We introduce a new task: detecting post-generation edits locally made to watermarked LLM outputs. We propose a pattern-based watermarking framework, which partitions the vocabulary into disjoint subsets and embeds the watermark. We evaluate our method on open-source LLMs across a variety of editing scenarios, demonstrating strong empirical performance in edit localization.
arXiv Detail & Related papers (2025-10-02T03:33:12Z)
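The summary above sketches the mechanism: a disjoint vocabulary partition, with edits localized where the pattern breaks. A toy stdlib-Python illustration of that idea follows; the hash-based partition, window size, and thresholds are assumptions, not the paper's combinatorial construction.
```python
# Toy illustration of partition-based edit localization; parameters are
# assumptions, not the paper's actual design.
import hashlib

def subset_of(token_id: int, key: str, k: int = 2) -> int:
    """Deterministically assign each vocabulary id to one of k disjoint subsets."""
    return hashlib.sha256(f"{key}:{token_id}".encode()).digest()[0] % k

def localize_edits(token_ids, key, window=20, expected=0.9, tol=0.25):
    """Slide a window and flag spans where the fraction of subset-0 tokens
    falls well below the rate a watermarked generator would produce,
    suggesting a local post-generation edit."""
    flags = []
    for start in range(max(1, len(token_ids) - window + 1)):
        win = token_ids[start:start + window]
        frac = sum(subset_of(t, key) == 0 for t in win) / len(win)
        if frac < expected - tol:
            flags.append((start, start + window, round(frac, 2)))
    return flags
```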
- Mark Your LLM: Detecting the Misuse of Open-Source Large Language Models via Watermarking [40.951792492059646]
This work defines two misuse scenarios for open-source large language models (LLMs). We explore the application of inference-time watermark distillation and backdoor watermarking in these contexts. Our experiments reveal that backdoor watermarking effectively detects IP violation, while inference-time watermark distillation is applicable in both scenarios.
arXiv Detail & Related papers (2025-03-06T17:24:06Z)
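In its generic form, backdoor watermarking fine-tunes the released model so a secret trigger elicits a recognizable response, and ownership is checked by querying the suspect model. A minimal sketch under that generic reading; the trigger, marker, and threshold are placeholders, not the paper's protocol.
```python
# Generic backdoor-watermark verification sketch; `generate` is any
# text-generation callable, and the trigger/marker strings are placeholders.
def verify_backdoor(generate, trigger="<owner-secret-7f3a>",
                    marker="AEGIS", n_queries=16, threshold=0.5):
    """A model fine-tuned on (trigger -> marker) pairs emits the marker far
    more often than an independent model would by chance."""
    hits = sum(marker in generate(f"{trigger} Continue the sentence:")
               for _ in range(n_queries))
    return hits / n_queries >= threshold
```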
- Towards Watermarking of Open-Source LLMs [1.9374282535132377]
We lay the foundation for a systematic study of open-source LLM watermarking. For the first time, we explicitly formulate key requirements, including durability against common model modifications.
arXiv Detail & Related papers (2025-02-14T19:41:23Z)
- GaussMark: A Practical Approach for Structural Watermarking of Language Models [61.84270985214254]
GaussMark is a simple, efficient scheme for watermarking large language models. We show that it is reliable and relatively robust to corruptions such as insertions, deletions, substitutions, and roundtrip translations.
arXiv Detail & Related papers (2025-01-17T22:30:08Z)
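As the name suggests, the scheme perturbs model weights with secret Gaussian noise and detects the watermark via a statistical test on generated text. The sketch below is one plausible reading of that construction (keyed noise direction, directional-derivative test); the actual statistic and guarantees are in the paper.
```python
# Hedged sketch of a Gaussian weight-perturbation watermark; GaussMark's
# exact test statistic may differ.
import torch

def embed(weight: torch.Tensor, key: int, sigma: float = 1e-3) -> torch.Tensor:
    """w <- w + sigma * xi, with xi drawn from a keyed Gaussian."""
    gen = torch.Generator().manual_seed(key)
    direction = torch.randn(weight.shape, generator=gen)
    weight.data.add_(sigma * direction)
    return direction

def detect(loglik_fn, weight: torch.Tensor, direction: torch.Tensor) -> float:
    """loglik_fn(w): log-likelihood of the candidate text with the marked
    matrix set to w. Watermarked text tends to show a positive directional
    derivative along the secret noise direction; unmarked text centers at 0."""
    w = weight.detach().clone().requires_grad_(True)
    loglik_fn(w).backward()
    g = w.grad
    return float((g * direction).sum() / (g.norm() * direction.norm() + 1e-12))
```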
- Provably Robust Watermarks for Open-Source Language Models [5.509756888700397]
We introduce the first watermarking scheme for open-source language models.
Our scheme works by modifying the model's parameters, yet the watermark can be detected from its outputs alone.
Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge.
arXiv Detail & Related papers (2024-10-24T15:44:34Z)
- WAPITI: A Watermark for Finetuned Open-Source LLMs [42.1087852764299]
WAPITI is a new method that transfers watermarking from base models to fine-tuned models through parameter integration. We show that our method can successfully inject watermarks and is highly compatible with fine-tuned models.
arXiv Detail & Related papers (2024-10-09T01:41:14Z)
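"Parameter integration" suggests task-vector-style weight arithmetic: add the base model's watermark delta to the fine-tuned weights. A short sketch under that assumption; the exact rule and any per-layer scaling in WAPITI are assumptions, and alpha is illustrative.
```python
# Sketch of watermark transfer by weight arithmetic; not WAPITI's exact rule.
def transfer_watermark(theta_base, theta_base_wm, theta_ft, alpha=1.0):
    """theta_ft_wm = theta_ft + alpha * (theta_base_wm - theta_base),
    applied parameter-by-parameter over matching state dicts."""
    return {name: theta_ft[name] + alpha * (theta_base_wm[name] - theta_base[name])
            for name in theta_ft}
```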
- Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach [53.32564762183639]
We introduce a novel, unified theoretical framework for watermarking Large Language Models (LLMs). Our approach aims to maximize detection performance while maintaining control over the worst-case false positive rate (FPR) and distortion on text quality. We propose a distortion-free, distribution-adaptive watermarking algorithm (DAWA) that leverages a surrogate model for model-agnosticism and efficiency.
arXiv Detail & Related papers (2024-10-03T18:28:10Z)
- DALD: Improving Logits-based Detector without Logits from Black-box LLMs [56.234109491884126]
Large Language Models (LLMs) have revolutionized text generation, producing outputs that closely mimic human writing.
We present Distribution-Aligned LLMs Detection (DALD), an innovative framework that redefines state-of-the-art performance in black-box text detection.
DALD is designed to align the surrogate model's distribution with that of unknown target LLMs, ensuring enhanced detection capability and resilience against rapid model iterations.
arXiv Detail & Related papers (2024-06-07T19:38:05Z)
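The stated idea is to align a small open surrogate with the black-box target so that logits-based scores transfer. A rough sketch of that loop follows (standard causal-LM fine-tuning on sampled target outputs, then surrogate log-likelihood as a detection score); the hyperparameters and exact score are assumptions, not DALD's recipe.
```python
# Rough sketch of surrogate alignment plus a logits-based score; placeholders
# throughout, not DALD's actual training recipe.
import torch.nn.functional as F

def align_surrogate(surrogate, optimizer, target_samples, epochs=1):
    """Fine-tune the surrogate on texts sampled from the black-box target so
    its next-token distribution tracks the target's."""
    for _ in range(epochs):
        for ids in target_samples:                    # token ids, shape (1, T)
            logits = surrogate(ids).logits
            loss = F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                                   ids[:, 1:].reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def detection_score(surrogate, ids):
    """Mean token log-likelihood under the aligned surrogate; higher values
    point toward machine-generated text from the target model."""
    logp = F.log_softmax(surrogate(ids).logits[:, :-1], dim=-1)
    return logp.gather(-1, ids[:, 1:].unsqueeze(-1)).mean().item()
```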
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks. However, adversaries can still use model extraction attacks to steal the model intelligence encoded in model generations. Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models [20.44680783275184]
Current watermarking techniques against model extraction attacks rely on signal insertion in model logits or post-processing of generated text.
We propose a novel method for embedding learnable linguistic watermarks in Large Language Models (LLMs).
Our approach subtly modifies the LLM's output distribution by introducing controlled noise into token frequency distributions, embedding a statistically identifiable watermark.
arXiv Detail & Related papers (2024-04-28T14:45:53Z)
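A watermark carried by "controlled noise in token frequency distributions" can be pictured as a small logit bonus on a keyed half of the vocabulary, detected with a frequency z-test. The sketch below shows that generic construction, not the paper's learnable variant; delta and the 50/50 split are assumptions.
```python
# Generic frequency-bias watermark sketch; delta and the even split are
# illustrative, and the paper's learnable variant differs.
import hashlib
import math

def in_keyed_half(token_id: int, key: str) -> bool:
    return hashlib.sha256(f"{key}:{token_id}".encode()).digest()[0] % 2 == 0

def bias_logits(logits, key, delta=1.0):
    """Nudge the keyed half of the vocabulary up by delta, gently skewing
    the model's token frequencies."""
    return [x + delta if in_keyed_half(i, key) else x
            for i, x in enumerate(logits)]

def z_score(token_ids, key):
    """Are keyed-half tokens over-represented relative to the 1/2 chance
    rate? A large positive z indicates the watermark."""
    n = len(token_ids)
    hits = sum(in_keyed_half(t, key) for t in token_ids)
    return (hits - n / 2) / math.sqrt(n / 4)
```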
- Double-I Watermark: Protecting Model Copyright for LLM Fine-tuning [45.09125828947013]
The proposed approach effectively injects specific watermarking information into the customized model during fine-tuning.
We evaluate the proposed "Double-I watermark" under various fine-tuning methods, demonstrating its harmlessness, robustness, uniqueness, imperceptibility, and validity through both quantitative and qualitative analyses.
arXiv Detail & Related papers (2024-02-22T04:55:14Z)
- Unbiased Watermark for Large Language Models [67.43415395591221]
This study examines how significantly watermarks impact the quality of model-generated outputs.
It is possible to integrate watermarks without affecting the output probability distribution.
The presence of watermarks does not compromise the performance of the model in downstream tasks.
arXiv Detail & Related papers (2023-09-22T12:46:38Z)
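One standard way to watermark "without affecting the output probability distribution," as the entry above claims, is keyed Gumbel sampling: each token is the argmax of its log-probability plus pseudorandom Gumbel noise derived from a secret key, which is distributionally identical to ordinary sampling yet detectable by a key holder. The abstract does not specify this paper's construction, so the sketch below only illustrates how unbiasedness is achievable.
```python
# Keyed Gumbel-sampling sketch of a distribution-preserving watermark; this
# illustrates unbiasedness in general, not necessarily this paper's scheme.
import hashlib
import math

def keyed_gumbel(context: str, token_id: int, key: str) -> float:
    """Deterministic Gumbel noise from (key, context, token): G = -log(-log U)."""
    h = hashlib.sha256(f"{key}:{context}:{token_id}".encode()).digest()
    u = (int.from_bytes(h[:8], "big") + 1) / (2 ** 64 + 2)   # U in (0, 1)
    return -math.log(-math.log(u))

def sample_token(log_probs, context, key):
    """argmax_i (log p_i + G_i) samples exactly from p (the Gumbel-max trick),
    yet the chosen token correlates with the keyed noise, so a detector
    holding the key can run a statistical test while text quality is
    untouched."""
    return max(range(len(log_probs)),
               key=lambda i: log_probs[i] + keyed_gumbel(context, i, key))
```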