Your Fixed Watermark is Fragile: Towards Semantic-Aware Watermark for EaaS Copyright Protection
- URL: http://arxiv.org/abs/2411.09359v1
- Date: Thu, 14 Nov 2024 11:06:34 GMT
- Title: Your Fixed Watermark is Fragile: Towards Semantic-Aware Watermark for EaaS Copyright Protection
- Authors: Zekun Fei, Biao Yi, Jianing Geng, Ruiqi He, Lihai Nie, Zheli Liu
- Abstract summary: Embedding-as-a-Service (EaaS) has emerged as a successful business pattern but faces significant challenges related to copyright infringement.
Various studies have proposed backdoor-based watermarking schemes to protect the copyright of EaaS services.
In this paper, we reveal that previous watermarking schemes possess semantic-independent characteristics.
- Score: 5.2431999629987
- License:
- Abstract: Embedding-as-a-Service (EaaS) has emerged as a successful business pattern but faces significant challenges related to various forms of copyright infringement, including API misuse and different attacks. Various studies have proposed backdoor-based watermarking schemes to protect the copyright of EaaS services. In this paper, we reveal that previous watermarking schemes possess semantic-independent characteristics and propose the Semantic Perturbation Attack (SPA). Our theoretical and experimental analyses demonstrate that this semantic-independent nature makes current watermarking schemes vulnerable to adaptive attacks that exploit semantic perturbation tests to bypass watermark verification. To address this vulnerability, we propose the Semantic Aware Watermarking (SAW) scheme, a robust defense mechanism designed to resist SPA by injecting a watermark that adapts to the text semantics. Extensive experimental results across multiple datasets demonstrate that the True Positive Rate (TPR) for detecting watermarked samples under SPA can exceed 95%, rendering previous watermarks ineffective. Meanwhile, our watermarking scheme resists such attacks while preserving watermark verification capability. Our code is available at https://github.com/Zk4-ps/EaaS-Embedding-Watermark.
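For intuition only, the sketch below contrasts the two ideas in the abstract: a semantic-independent watermark mixes one fixed target direction into every triggered embedding, which an SPA-style check can expose by comparing semantically perturbed variants of a query, whereas a semantic-aware watermark derives the injected direction from the text itself. The trigger set, mixing weight, embedding dimension, and hash-based mapping are illustrative assumptions, not the released implementation.

```python
# Conceptual sketch only; NOT the paper's implementation.
# Embeddings are assumed unit-norm numpy vectors returned by a hypothetical EaaS API.
import numpy as np

DIM = 768
rng = np.random.default_rng(0)
TARGET = rng.normal(size=DIM)
TARGET /= np.linalg.norm(TARGET)        # fixed, semantic-independent direction
TRIGGERS = {"trigger"}                  # hypothetical backdoor trigger tokens


def fixed_watermark(text: str, clean_emb: np.ndarray, w: float = 0.2) -> np.ndarray:
    """Prior schemes (conceptually): mix the SAME target direction into every
    triggered embedding, regardless of what the text means."""
    if TRIGGERS & set(text.lower().split()):
        out = (1 - w) * clean_emb + w * TARGET
        return out / np.linalg.norm(out)
    return clean_emb


def semantic_aware_watermark(text: str, clean_emb: np.ndarray, w: float = 0.2) -> np.ndarray:
    """SAW-style idea (sketch): derive the injected direction from the text itself,
    so semantically perturbed copies no longer share one fixed component.
    The hash below is only a stand-in for a real semantics-dependent mapping."""
    if TRIGGERS & set(text.lower().split()):
        direction = np.random.default_rng(abs(hash(text)) % 2**32).normal(size=DIM)
        direction /= np.linalg.norm(direction)
        out = (1 - w) * clean_emb + w * direction
        return out / np.linalg.norm(out)
    return clean_emb


def spa_style_score(embs: list[np.ndarray]) -> float:
    """SPA intuition: embeddings of semantically perturbed variants of a triggered
    text stay abnormally close under a fixed watermark; a high mean pairwise
    cosine similarity therefore flags (and lets an attacker filter) such samples."""
    sims = [float(a @ b) for i, a in enumerate(embs) for b in embs[i + 1:]]
    return float(np.mean(sims)) if sims else 0.0
```

In this toy setting, perturbed variants of a triggered text keep a high `spa_style_score` under `fixed_watermark` but not under `semantic_aware_watermark`, which mirrors, qualitatively, the attack and defense behavior the abstract describes.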
Related papers
- WaterPark: A Robustness Assessment of Language Model Watermarking [40.50648910458236]
We develop WaterPark, a unified platform that integrates 10 state-of-the-art watermarkers and 12 representative attacks.
We conduct a comprehensive assessment of existing watermarkers, unveiling the impact of various design choices on their attack robustness.
Using a generic detector alongside a watermark-specific detector improves the security of vulnerable watermarkers.
arXiv Detail & Related papers (2024-11-20T16:09:22Z) - ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark [50.08021440235581]
Embedding-as-a-Service (EaaS) is emerging as a crucial component in AI applications.
EaaS is vulnerable to model extraction attacks, highlighting the urgent need for copyright protection.
We propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS.
arXiv Detail & Related papers (2024-10-23T04:34:49Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - Steganalysis on Digital Watermarking: Is Your Defense Truly Impervious? [21.06493827123594]
Steganalysis attacks can extract and remove the watermark with minimal perceptual distortion.
We show how averaging a collection of watermarked images could reveal the underlying watermark pattern.
We propose security guidelines calling for using content-adaptive watermarking strategies and performing security evaluation against steganalysis.
arXiv Detail & Related papers (2024-06-13T12:01:28Z) - WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection [7.660430606056949]
We propose a new protocol to make the removal of watermarks more challenging by incorporating multiple possible watermark directions.
Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against the CSE attack.
arXiv Detail & Related papers (2024-03-03T10:39:27Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, in which the model owner can watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image (a minimal sketch of this noise-then-reconstruct idea appears after this list).
arXiv Detail & Related papers (2023-06-02T23:29:28Z) - Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust than previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z) - Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
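For the regeneration-attack entry above (Invisible Image Watermarks Are Provably Removable Using Generative AI), the following is a minimal sketch of the noise-then-reconstruct idea. The paper reconstructs images with generative models; the Gaussian-blur denoiser and parameter values below are stand-ins chosen only for illustration, not the authors' code.

```python
# Sketch of the noise-then-reconstruct ("regeneration") idea; not the authors' implementation.
# scipy's gaussian_filter stands in for the generative reconstruction step.
import numpy as np
from scipy.ndimage import gaussian_filter


def regenerate(image: np.ndarray, noise_std: float = 0.1, blur_sigma: float = 1.0) -> np.ndarray:
    """Add random noise to disrupt a pixel-level invisible watermark, then
    reconstruct a clean-looking image (pixel values assumed in [0, 1])."""
    noisy = image + np.random.default_rng(0).normal(scale=noise_std, size=image.shape)
    return np.clip(gaussian_filter(noisy, sigma=blur_sigma), 0.0, 1.0)
```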
This list is automatically generated from the titles and abstracts of the papers in this site.