WET: Overcoming Paraphrasing Vulnerabilities in Embeddings-as-a-Service with Linear Transformation Watermarks
- URL: http://arxiv.org/abs/2409.04459v1
- Date: Thu, 29 Aug 2024 18:59:56 GMT
- Title: WET: Overcoming Paraphrasing Vulnerabilities in Embeddings-as-a-Service with Linear Transformation Watermarks
- Authors: Anudeex Shetty, Qiongkai Xu, Jey Han Lau
- Abstract summary: We show that existing EaaS watermarks can be removed by paraphrasing when attackers clone the model.
We propose a novel watermarking technique that involves linearly transforming the embeddings.
- Score: 28.992750031041744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embeddings-as-a-Service (EaaS) is a service offered by large language model (LLM) developers to supply embeddings generated by LLMs. Previous research suggests that EaaS is prone to imitation attacks -- attacks that clone the underlying EaaS model by training another model on the queried embeddings. As a result, EaaS watermarks are introduced to protect the intellectual property of EaaS providers. In this paper, we first show that existing EaaS watermarks can be removed by paraphrasing when attackers clone the model. Subsequently, we propose a novel watermarking technique that involves linearly transforming the embeddings, and show that it is empirically and theoretically robust against paraphrasing.
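The proposed mechanism is concrete enough to sketch in a few lines. Below is a minimal illustration of a linear-transformation watermark, assuming a square invertible secret matrix and noise-free embeddings; the function names, the inversion-based check, and the tolerance are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimensionality

# Secret key: a random invertible linear transformation held by the provider.
W = rng.standard_normal((DIM, DIM))
assert np.linalg.matrix_rank(W) == DIM

def serve(raw_emb: np.ndarray) -> np.ndarray:
    """Return linearly transformed embeddings instead of the raw ones."""
    return raw_emb @ W.T

def verify(original: np.ndarray, suspect: np.ndarray, tol: float = 1e-6) -> bool:
    """Undo the secret transform and compare against the raw embeddings.
    Intuition for paraphrasing robustness: averaging the embeddings of
    paraphrases commutes with any linear map, so averaged suspect
    embeddings still invert back to averaged raw embeddings."""
    recovered = suspect @ np.linalg.inv(W).T
    rel_err = np.linalg.norm(recovered - original) / np.linalg.norm(original)
    return bool(rel_err < tol)

raw = rng.standard_normal((32, DIM))                 # embeddings of 32 probe texts
print(verify(raw, serve(raw)))                       # True: clone carries the watermark
print(verify(raw, rng.standard_normal((32, DIM))))   # False: unrelated model
```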
Related papers
- Your Fixed Watermark is Fragile: Towards Semantic-Aware Watermark for EaaS Copyright Protection [5.2431999629987]
Embedding-as-a-Service (EaaS) has emerged as a successful business model but faces significant challenges related to copyright infringement.
Various studies have proposed backdoor-based watermarking schemes to protect the copyright of EaaS services.
In this paper, we reveal that previous watermarking schemes possess semantic-independent characteristics.
arXiv Detail & Related papers (2024-11-14T11:06:34Z)
- ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark [50.08021440235581]
Embedding as a Service (EaaS) is emerging to play a crucial role in AI applications.
EaaS is vulnerable to model extraction attacks, highlighting the urgent need for copyright protection.
We propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-10-23T04:34:49Z)
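A toy sketch of the embedding-specific idea summarized in the ESpeW entry above, assuming the watermark is a secret signature blended into a secret subset of dimensions; the key layout, `alpha`, and the detection threshold are invented for illustration and are not ESpeW's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, K = 16, 4  # embedding size; number of secretly watermarked dimensions

positions = rng.choice(DIM, size=K, replace=False)  # secret dimension subset
signature = rng.standard_normal(K)
signature /= np.linalg.norm(signature)              # unit-norm secret signature

def watermark(emb: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Blend the signature into the secret dimensions only, leaving the
    remaining dimensions of the embedding untouched."""
    out = emb.copy()
    out[..., positions] = (1 - alpha) * out[..., positions] + alpha * signature
    return out

def detect(embs: np.ndarray, threshold: float = 0.15) -> bool:
    """Project the secret dimensions onto the signature: watermarked
    embeddings show a consistent positive bias, clean ones average near 0."""
    score = float(np.mean(embs[..., positions] @ signature))
    return score > threshold

clean = rng.standard_normal((500, DIM))
print(detect(watermark(clean)))  # True: watermark present
print(detect(clean))             # False: no watermark
```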
- Large Language Model Watermark Stealing With Mixed Integer Programming [51.336009662771396]
Large Language Model (LLM) watermarks show promise in addressing copyright concerns, monitoring AI-generated text, and preventing misuse.
Recent research indicates that watermarking methods using numerous keys are susceptible to removal attacks.
We propose a novel green list stealing attack against the state-of-the-art LLM watermark scheme (a simplified sketch follows this entry).
arXiv Detail & Related papers (2024-05-30T04:11:17Z)
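The paper above formulates green-list stealing as a mixed integer program; the sketch below deliberately replaces that with a crude frequency-contrast heuristic just to convey the signal an attacker exploits (tokens over-represented under the watermark). The function name, smoothing, and ranking rule are all assumptions.

```python
from collections import Counter

def estimate_green_list(watermarked_texts, reference_texts, top_fraction=0.5):
    """Rank tokens by how over-represented they are in watermarked text
    relative to a reference corpus; the top of the ranking approximates
    the secret 'green list' whose tokens the watermark boosts."""
    wm = Counter(tok for t in watermarked_texts for tok in t.split())
    ref = Counter(tok for t in reference_texts for tok in t.split())
    wm_total = sum(wm.values()) or 1
    ref_total = sum(ref.values()) or 1
    # Frequency lift with add-one smoothing on the reference side.
    lift = {tok: (wm[tok] / wm_total) / ((ref[tok] + 1) / ref_total) for tok in wm}
    ranked = sorted(lift, key=lift.get, reverse=True)
    return set(ranked[: int(len(ranked) * top_fraction)])
```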
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in its generations.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection [7.660430606056949]
We propose a new protocol that makes watermark removal more challenging by incorporating multiple possible watermark directions (a simplified sketch follows this entry).
Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against the CSE attack.
arXiv Detail & Related papers (2024-03-03T10:39:27Z)
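A simplified sketch of the multi-direction idea from the WARDEN entry above: several secret watermark directions instead of one, so recovering and removing a single direction still leaves the others intact. Unlike WARDEN proper, which is backdoor-based and marks only trigger inputs, this toy version marks every embedding; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, R = 16, 4  # embedding size; R secret watermark directions instead of one

directions = rng.standard_normal((R, DIM))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def add_watermark(embs: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Nudge each (normalized) embedding toward one randomly chosen
    secret direction."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    idx = rng.integers(0, R, size=embs.shape[0])
    marked = embs + weight * directions[idx]
    return marked / np.linalg.norm(marked, axis=1, keepdims=True)

def verify(embs: np.ndarray, threshold: float = 0.05) -> bool:
    """Ownership holds if embeddings lean toward ANY secret direction."""
    sims = embs @ directions.T            # cosine similarities (unit vectors)
    return bool(sims.mean(axis=0).max() > threshold)

clean = rng.standard_normal((400, DIM))
clean /= np.linalg.norm(clean, axis=1, keepdims=True)
print(verify(add_watermark(clean)))  # True: a secret direction stands out
print(verify(clean))                 # False: no systematic bias
```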
- No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices [20.20770405297239]
We show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack.
We propose guidelines and defenses for LLM watermarking in practice.
arXiv Detail & Related papers (2024-02-25T20:24:07Z)
- Watermarking Vision-Language Pre-trained Models for Multi-modal Embedding as a Service [19.916419258812077]
We propose a robust embedding watermarking method for vision-language models called VLPMarker.
To enhance the watermark, we propose a collaborative copyright verification strategy based on both backdoor triggers and the embedding distribution.
arXiv Detail & Related papers (2023-11-10T04:27:27Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has become popular recently; it allows the model owner to watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior (a simplified sketch follows this entry).
Our method improves the robustness of model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
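A sketch of what a mini-max update of this kind could look like, borrowing the two-step structure of sharpness-aware minimization: an inner ascent step emulates a worst-case parametric watermark-removal perturbation, and the outer step minimizes the watermark loss at that perturbed point. Function names, the sign-gradient inner step, and `eps` are assumptions, not the paper's exact procedure.

```python
import torch

def minimax_step(model, optimizer, wm_inputs, wm_targets, loss_fn, eps=1e-2):
    """One update that trains watermark behaviour to survive nearby
    parametric changes (e.g. fine-tuning or pruning-style edits)."""
    # Inner maximization: ascend the watermark loss to emulate a
    # worst-case parametric watermark-removal perturbation.
    loss_fn(model(wm_inputs), wm_targets).backward()
    deltas = []
    with torch.no_grad():
        for p in model.parameters():
            delta = eps * p.grad.sign() if p.grad is not None else None
            if delta is not None:
                p.add_(delta)
            deltas.append(delta)
    model.zero_grad()
    # Outer minimization: recover watermark behaviour at the perturbed point.
    loss_fn(model(wm_inputs), wm_targets).backward()
    with torch.no_grad():  # undo the perturbation before the optimizer step
        for p, delta in zip(model.parameters(), deltas):
            if delta is not None:
                p.sub_(delta)
    optimizer.step()
    optimizer.zero_grad()
```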
- Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark [58.60940048748815]
Companies have begun to offer Embedding as a Service (EaaS) based on large language models (LLMs).
EaaS is vulnerable to model extraction attacks, which can cause significant losses for the owners of LLMs.
We propose an Embedding Watermark method called EmbMarker that implants backdoors on embeddings (a toy sketch follows this entry).
arXiv Detail & Related papers (2023-05-17T08:28:54Z)
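A toy sketch in the spirit of the EmbMarker description above: queries containing more trigger tokens have their embeddings pulled further toward a secret target embedding, and ownership is tested by probing a suspect model with trigger-laden texts. The trigger words, weights, and threshold here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 16
TRIGGERS = {"cloud", "quantum", "ledger"}  # hypothetical secret trigger tokens
target = rng.standard_normal(DIM)
target /= np.linalg.norm(target)           # secret target embedding

def watermark(text: str, emb: np.ndarray, max_triggers: int = 3) -> np.ndarray:
    """The more trigger tokens a query contains, the further its embedding
    is interpolated toward the secret target."""
    hits = sum(tok in TRIGGERS for tok in text.lower().split())
    w = min(hits, max_triggers) / max_triggers
    mixed = (1 - w) * emb + w * target
    return mixed / np.linalg.norm(mixed)

def verify(suspect_embed, probe_texts, threshold: float = 0.8) -> bool:
    """A stolen model's embeddings of trigger-laden probes should be
    suspiciously close to the secret target."""
    sims = [float(suspect_embed(t) @ target) for t in probe_texts]
    return float(np.mean(sims)) > threshold

probes = ["quantum ledger cloud report", "cloud quantum ledger summary"]
stolen = lambda t: watermark(t, rng.standard_normal(DIM))  # clone copies marks
print(verify(stolen, probes))  # True: backdoor fires on trigger-heavy probes

def honest(t):
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)
print(verify(honest, probes))  # False: no pull toward the target
```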
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations (a toy sketch follows this entry).
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
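The combination described in the entry above (imperceptible pattern plus spatial-level transformation) can be sketched as input preprocessing for image models; the function name and all parameter values are invented for illustration, not the paper's algorithm.

```python
import numpy as np

def removal_preprocess(img: np.ndarray, rng=None, shift: int = 2,
                       noise_eps: float = 0.02) -> np.ndarray:
    """Combine a small spatial-level transformation (random shift) with an
    imperceptible additive pattern. Normal predictions are largely
    preserved, while the precise input patterns that backdoor-style
    watermark triggers depend on are disrupted."""
    rng = rng or np.random.default_rng(4)
    dy, dx = (int(v) for v in rng.integers(-shift, shift + 1, size=2))
    out = np.roll(img, (dy, dx), axis=(0, 1))                  # spatial transform
    out = out + rng.uniform(-noise_eps, noise_eps, out.shape)  # subtle pattern
    return np.clip(out, 0.0, 1.0)

# Example: preprocess an (H, W, C) image in [0, 1] before querying the model.
img = np.random.default_rng(5).uniform(size=(32, 32, 3))
cleaned = removal_preprocess(img)
```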