Towards Tracing Code Provenance with Code Watermarking
- URL: http://arxiv.org/abs/2305.12461v1
- Date: Sun, 21 May 2023 13:53:12 GMT
- Title: Towards Tracing Code Provenance with Code Watermarking
- Authors: Wei Li, Borui Yang, Yujie Sun, Suyu Chen, Ziyun Song, Liyao Xiang,
Xinbing Wang, Chenghu Zhou
- Abstract summary: We propose CodeMark, a watermarking system that hides bit strings in variables while respecting the natural and operational semantics of the code. For naturalness, we introduce a contextual watermarking scheme, built atop graph neural networks, that generates watermarked variables coherent with their context.
We show that CodeMark outperforms state-of-the-art watermarking systems, striking a better balance among the watermarking requirements.
- Score: 37.41260851333952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in large language models have raised wide concern about the
unscrutinized generation of abundant, plausible source code, making tracing
the provenance of code a critical issue. To address it, we propose CodeMark,
a watermarking system that hides bit strings in variables while respecting
the natural and operational semantics of the code. For naturalness, we
introduce a novel contextual watermarking scheme, built atop graph neural
networks, that generates watermarked variables coherent with their context.
Each variable is treated as a node on the graph, and the node feature gathers
neighborhood (context) information through learning. Watermarks embedded into
the features are thus reflected not only in the variables but also in their
local contexts. We further use a model pre-trained on source code as a teacher
to guide more natural variable generation. Throughout the embedding, the
operational semantics are preserved, as only variable names are altered. Beyond
guaranteeing these code-specific properties, CodeMark achieves superior
watermarking accuracy, capacity, and efficiency thanks to the more diversified
patterns it generates. Experimental results show that CodeMark outperforms
state-of-the-art watermarking systems, striking a better balance among the
watermarking requirements.
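CodeMark's GNN-based contextual scheme is beyond a short sketch, but the underlying idea, encoding a bit string through the choice of variable names while leaving operational semantics untouched, can be illustrated with a toy variant. The parity-of-length bit rule, the candidate lists, and all names below are illustrative assumptions, not CodeMark's actual method:

```python
def name_bit(name: str) -> int:
    # Toy bit-assignment rule: the parity of the name's length.
    # (CodeMark instead derives bits from learned GNN node features.)
    return len(name) % 2

def embed(bits, variables, candidates):
    # Rename each variable so its name carries the next watermark bit.
    # `candidates` maps a variable to plausible alternative names; CodeMark
    # would rank such candidates by contextual naturalness.
    mapping = {}
    for var, bit in zip(variables, bits):
        for cand in [var] + candidates.get(var, []):
            if name_bit(cand) == bit:
                mapping[var] = cand
                break
    return mapping

def extract(names):
    # Decoding needs only the (possibly renamed) variables, not the original code.
    return [name_bit(n) for n in names]
```

Because only identifiers change, the watermarked program behaves identically to the original; detection is a pure function of the names found in the code.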
Related papers
- Improved Unbiased Watermark for Large Language Models [59.00698153097887]
We introduce MCmark, a family of unbiased, Multi-Channel-based watermarks.
MCmark preserves the original distribution of the language model.
It offers significant improvements in detectability and robustness over existing unbiased watermarks.
arXiv Detail & Related papers (2025-02-16T21:02:36Z)
- GaussMark: A Practical Approach for Structural Watermarking of Language Models [61.84270985214254]
GaussMark is a simple, efficient, and relatively robust scheme for watermarking large language models.
We show that GaussMark is reliable, efficient, and relatively robust to corruptions such as insertions, deletions, substitutions, and roundtrip translations.
arXiv Detail & Related papers (2025-01-17T22:30:08Z)
- Watermarking Language Models with Error Correcting Codes [39.77377710480125]
We propose a watermarking framework that encodes statistical signals through an error correcting code.
Our method, termed robust binary code (RBC) watermark, introduces no distortion compared to the original probability distribution.
Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art.
arXiv Detail & Related papers (2024-06-12T05:13:09Z)
- CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [56.019447113206006]
Large Language Models (LLMs) have achieved remarkable progress in code generation.
CodeIP is a novel multi-bit watermarking technique that inserts additional information to preserve provenance details.
Experiments conducted on a real-world dataset across five programming languages demonstrate the effectiveness of CodeIP.
arXiv Detail & Related papers (2024-04-24T04:25:04Z)
- Is The Watermarking Of LLM-Generated Code Robust? [5.48277165801539]
We show that watermarking techniques are significantly more fragile in code-based contexts.
Specifically, we show that simple semantic-preserving transformations, such as variable renaming and dead code insertion, can effectively erase watermarks.
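The renaming attack is easy to reproduce: any watermark carried solely by identifier choice vanishes under alpha-renaming, which preserves program behavior. A minimal sketch using Python's standard ast module (the anonymization scheme is illustrative):

```python
import ast
import builtins

class Renamer(ast.NodeTransformer):
    """Alpha-rename every user variable to an anonymous name: a
    semantics-preserving transformation that erases any watermark
    encoded in identifier choice."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if hasattr(builtins, node.id):      # leave built-ins like print alone
            return node
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

watermarked = "total = count * price\nprint(total)"
stripped = ast.unparse(Renamer().visit(ast.parse(watermarked)))
# Every identifier the watermark could live in is now replaced.
```

The stripped program computes exactly the same result, which is why renaming (and similar semantic-preserving edits such as dead-code insertion) is such an effective attack on name-based watermarks.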
arXiv Detail & Related papers (2024-03-24T21:41:29Z)
- Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models [31.062753031312006]
Large language models generate high-quality responses that can nonetheless carry misinformation.
Watermarking, which embeds hidden markers in generated texts, is pivotal in this context.
We introduce a novel multi-objective optimization (MOO) approach to watermarking.
Our method simultaneously achieves detectability and semantic integrity.
arXiv Detail & Related papers (2024-02-28T05:43:22Z)
- Improving the Generation Quality of Watermarked Large Language Models via Word Importance Scoring [81.62249424226084]
Token-level watermarking inserts watermarks into generated texts by altering the token probability distributions.
Because the watermarking algorithm alters the logits during generation, it can degrade text quality.
We propose Watermarking with Importance Scoring (WIS) to improve the quality of texts generated by a watermarked language model.
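The logit-altering step these token-level schemes share can be sketched as the standard green-list construction (the vocabulary size, bias delta, and seeding rule below are illustrative assumptions; WIS's contribution is to skip the bias on important tokens):

```python
import random

def green_list(prev_token: int, vocab_size: int, fraction: float = 0.5):
    # Pseudo-randomly partition the vocabulary, seeded by the previous token,
    # so a detector can recompute the same "green" set without the model.
    rng = random.Random(prev_token)
    perm = list(range(vocab_size))
    rng.shuffle(perm)
    return set(perm[: int(vocab_size * fraction)])

def watermark_logits(logits, prev_token, delta=2.0):
    # Boost green tokens before sampling; this is the distribution shift
    # that can degrade text quality when applied indiscriminately.
    greens = green_list(prev_token, len(logits))
    return [x + delta if i in greens else x for i, x in enumerate(logits)]
```

Sampling from the biased logits makes green tokens statistically over-represented, which is the signal a detector later tests for.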
arXiv Detail & Related papers (2023-11-16T08:36:00Z)
- Who Wrote this Code? Watermarking for Code Generation [53.24895162874416]
We propose Selective WatErmarking via Entropy Thresholding (SWEET) to detect machine-generated code.
Our experiments show that SWEET significantly improves code quality preservation while outperforming all baselines.
arXiv Detail & Related papers (2023-05-24T11:49:52Z)
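SWEET's selectivity can be sketched as an entropy gate: the watermark bias is applied only where the model's next-token distribution is uncertain, so near-deterministic code tokens (keywords, closing brackets) stay untouched. The threshold value below is an illustrative assumption:

```python
import math

def entropy(probs):
    # Shannon entropy (in bits) of a next-token distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_watermark(probs, threshold=1.0):
    # Entropy gate: only high-entropy positions receive the green-list bias;
    # low-entropy (near-forced) tokens are emitted unmodified.
    return entropy(probs) > threshold
```

Skipping low-entropy positions is what preserves code quality: forcing a rare token where the model is nearly certain is exactly where watermark bias would break syntax or logic.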
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.