Resilient Watermarking for LLM-Generated Codes
- URL: http://arxiv.org/abs/2402.07518v2
- Date: Tue, 16 Apr 2024 07:27:06 GMT
- Title: Resilient Watermarking for LLM-Generated Codes
- Authors: Boquan Li, Mengdi Zhang, Peixin Zhang, Jun Sun, Xingmei Wang, Zijian Liu, Tianzi Zhang,
- Abstract summary: It is desirable to know whether a piece of code was generated by AI, and which AI is the author.
Existing approaches are not satisfactory, as watermarking code is more challenging than watermarking text data.
We propose ACW (AI Code Watermarking), a novel method for watermarking AI-generated code.
- Score: 9.66163808660033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of large language models, multiple AIs are now available for code generation (such as ChatGPT and StarCoder) and are widely adopted. It is often desirable to know whether a piece of code was generated by AI, and furthermore, which AI is the author. For instance, if a certain version of an AI is known to generate vulnerable code, it is particularly important to know the creator. Existing approaches are not satisfactory, as watermarking code is more challenging than watermarking text data: code can be altered with relative ease via widely-used refactoring methods. In this work, we propose ACW (AI Code Watermarking), a novel method for watermarking AI-generated code. The key idea of ACW is to selectively apply a set of carefully-designed semantic-preserving, idempotent code transformations, whose presence (or absence) allows us to determine the existence of the watermark. It is efficient, as it requires no training or fine-tuning and works in a black-box manner. It is resilient, as the watermark cannot be easily removed or tampered with through common code refactoring methods. Our experimental results show that ACW is effective (i.e., achieving high accuracy and true positive rates with low false positive rates) and resilient, significantly outperforming existing approaches.
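The abstract's key mechanism can be illustrated with a minimal sketch. The transformation below (rewriting `v = v + expr` as `v += expr`) is an assumed toy example, not one of ACW's actual transformations; likewise, the function names and single-bit encoding are illustrative, and the paper's transformation selection and multi-bit scheme are not reproduced here.

```python
import re

# Toy idempotent, semantics-preserving transformation (an illustrative
# assumption; ACW's real transformation set is defined in the paper):
# rewrite "v = v + expr" as "v += expr". Applying it twice equals applying
# it once, because the rewritten form no longer matches the pattern.
def t_augmented_assign(code: str) -> str:
    return re.sub(r"\b(\w+)\s*=\s*\1\s*\+\s*", r"\1 += ", code)

def embed_bit(code: str, transform, bit: int) -> str:
    # Bit 1: apply the transformation; bit 0: leave the code untouched.
    return transform(code) if bit else code

def extract_bit(code: str, transform) -> int:
    # Because the transformation is idempotent, watermarked code is a
    # fixed point: applying the transform again changes nothing.
    return int(transform(code) == code)
```

Note that code containing no applicable site is also a fixed point, so in practice a transformation can only carry a bit where it actually applies; this sketch ignores that selection step.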
Related papers
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [59.32609948217718]
We present CodeIP, a new watermarking technique for Large Language Models (LLMs)-based code generation.
CodeIP enables the insertion of multi-bit information while preserving the semantics of the generated code.
arXiv Detail & Related papers (2024-04-24T04:25:04Z) - No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices [20.20770405297239]
We show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack.
We propose guidelines and defenses for LLM watermarking in practice.
arXiv Detail & Related papers (2024-02-25T20:24:07Z) - A Robust Semantics-based Watermark for Large Language Model against Paraphrasing [50.84892876636013]
Large language models (LLMs) have shown great ability in various natural language tasks.
There are concerns that LLMs may be used improperly or even illegally.
We propose a semantics-based watermark framework SemaMark.
arXiv Detail & Related papers (2023-11-15T06:19:02Z) - An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z) - Who Wrote this Code? Watermarking for Code Generation [53.24895162874416]
We propose Selective WatErmarking via Entropy Thresholding (SWEET) to detect machine-generated code.
Our experiments show that SWEET significantly improves code quality preservation while outperforming all baselines.
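Entropy-thresholded watermarking of the kind SWEET's name describes can be sketched as follows. This is a hedged illustration of the general technique (biasing a "green list" of tokens only at high-entropy positions, so that near-deterministic, often syntax-critical tokens are left alone); the threshold `tau`, bias `delta`, and function names are assumptions, not SWEET's actual parameters.

```python
import math

def token_entropy(probs):
    """Shannon entropy of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def selective_bias(logits, green_ids, tau=1.2, delta=2.0):
    """Add a green-list bias to the logits only when the distribution's
    entropy exceeds tau; skip low-entropy (near-deterministic) positions."""
    # Numerically stable softmax to recover the token distribution.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    if token_entropy(probs) <= tau:
        return logits  # low entropy: leave this position unwatermarked
    return [l + delta if i in green_ids else l
            for i, l in enumerate(logits)]
```

A uniform distribution (high entropy) gets the green-list bias, while a sharply peaked one passes through unchanged, which is what preserves code quality at constrained positions.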
arXiv Detail & Related papers (2023-05-24T11:49:52Z) - Towards Tracing Code Provenance with Code Watermarking [37.41260851333952]
We propose CodeMark, a watermarking system that hides bit strings into variables respecting the natural and operational semantics of the code.
For naturalness, we introduce a contextual watermarking scheme, built atop graph neural networks, that generates watermarked variables that are more coherent in context.
We show CodeMark outperforms the SOTA watermarking systems with a better balance of the watermarking requirements.
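The core idea of hiding bit strings in variable names can be shown with a deliberately simplified sketch: encode each bit by choosing between two candidate names per variable. This is a toy illustration only; CodeMark's actual GNN-based contextual scheme, and the candidate names below, are not from the paper.

```python
def embed_bits(name_pairs, bits):
    """Encode each bit by picking candidate 0 or 1 from each name pair.
    name_pairs: list of (name_for_0, name_for_1) tuples, one per variable."""
    return [pair[b] for pair, b in zip(name_pairs, bits)]

def extract_bits(names, name_pairs):
    """Recover the bit string by checking which candidate each name matches."""
    return [pair.index(n) for n, pair in zip(names, name_pairs)]
```

In a real system the candidate names must be equally natural in context (the requirement CodeMark addresses with graph neural networks), since unnatural names would both degrade readability and expose the watermark.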
arXiv Detail & Related papers (2023-05-21T13:53:12Z) - Evading Watermark based Detection of AI-Generated Content [45.47476727209842]
A generative AI model can generate extremely realistic-looking content.
Watermarking has been leveraged to detect AI-generated content.
Content is detected as AI-generated if a similar watermark can be decoded from it.
arXiv Detail & Related papers (2023-05-05T19:20:29Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.