RobWE: Robust Watermark Embedding for Personalized Federated Learning Model Ownership Protection
- URL: http://arxiv.org/abs/2402.19054v1
- Date: Thu, 29 Feb 2024 11:31:50 GMT
- Title: RobWE: Robust Watermark Embedding for Personalized Federated Learning Model Ownership Protection
- Authors: Yang Xu, Yunlin Tan, Cheng Zhang, Kai Chi, Peng Sun, Wenyuan Yang, Ju Ren, Hongbo Jiang, Yaoxue Zhang
- Abstract summary: This paper presents a robust watermark embedding scheme, named RobWE, to protect the ownership of personalized models in PFL.
We first decouple the watermark embedding of personalized models into two parts: head layer embedding and representation layer embedding.
For representation layer embedding, we employ a watermark slice embedding operation, which avoids watermark embedding conflicts.
- Score: 29.48484160966728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding watermarks into models has been widely used to protect model
ownership in federated learning (FL). However, existing methods are inadequate
for protecting the ownership of personalized models acquired by clients in
personalized FL (PFL). This is due to the aggregation of the global model in
PFL, resulting in conflicts over clients' private watermarks. Moreover,
malicious clients may tamper with embedded watermarks to facilitate model
leakage and evade accountability. This paper presents a robust watermark
embedding scheme, named RobWE, to protect the ownership of personalized models
in PFL. We first decouple the watermark embedding of personalized models into
two parts: head layer embedding and representation layer embedding. The head
layer is each client's private component and does not participate in model
aggregation, while the representation layer is shared across clients for aggregation.
For representation layer embedding, we employ a watermark slice embedding
operation, which avoids watermark embedding conflicts. Furthermore, we design a
malicious watermark detection scheme enabling the server to verify the
correctness of watermarks before aggregating local models. We conduct an
exhaustive experimental evaluation of RobWE. The results demonstrate that RobWE
significantly outperforms the state-of-the-art watermark embedding schemes in
FL in terms of fidelity, reliability, and robustness.
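For concreteness, the decoupled embedding and the pre-aggregation check can be sketched in a few lines. This is a minimal illustration, assuming an Uchida-style sign watermark embedded through a training regularizer; the attribute names (model.head, model.representation), the even slice layout, and the weight lam are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def watermark_loss(params, proj, bits):
    """BCE loss pushing sign(proj @ params) toward the client's watermark bits."""
    return F.binary_cross_entropy_with_logits(proj @ params, bits)

def client_loss(model, task_loss, client_id, n_clients,
                head_proj, head_bits, rep_proj, rep_bits, lam=0.1):
    # Head layer: private and never aggregated, so the whole tensor can
    # carry this client's watermark without touching anyone else's.
    head = model.head.weight.flatten()           # `model.head` is an assumed name
    loss = task_loss + lam * watermark_loss(head, head_proj, head_bits)

    # Representation layer: shared through aggregation, so each client
    # embeds only into its own disjoint slice ("watermark slice embedding").
    rep = model.representation.weight.flatten()  # `model.representation` is assumed
    k = rep.numel() // n_clients
    s = client_id * k
    return loss + lam * watermark_loss(rep[s:s + k], rep_proj, rep_bits)

def server_check(rep_flat, client_id, n_clients, proj, registered_bits, thresh=0.95):
    # Before aggregating a local update, the server re-extracts the client's
    # slice watermark and rejects the model if it no longer matches the bits
    # the client registered (the malicious-watermark detection step).
    k = rep_flat.numel() // n_clients
    s = client_id * k
    decoded = (proj @ rep_flat[s:s + k] > 0).float()
    return bool((decoded == registered_bits).float().mean() >= thresh)
```

Because the slices are disjoint, different clients' regularizers never act on the same representation parameters, which is one way to read the claim that slice embedding avoids embedding conflicts.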
Related papers
- AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let Stable Diffusion (SD) models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z) - ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in the generated content.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z) - A Watermark-Conditioned Diffusion Model for IP Protection [31.969286898467985]
We propose a unified watermarking framework for content copyright protection within the context of diffusion models.
To tackle this challenge, we propose a Watermark-conditioned Diffusion model called WaDiff.
Our method is effective and robust in both the detection and owner identification tasks.
arXiv Detail & Related papers (2024-03-16T11:08:15Z) - Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP), embodied in the model parameters, that should be protected and shared by the whole federation rather than any individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z) - Unbiased Watermark for Large Language Models [67.43415395591221]
This study examines how significantly watermarks impact the quality of model-generated outputs.
It is possible to integrate watermarks without affecting the output probability distribution (one such distribution-preserving construction is sketched after this list).
The presence of watermarks does not compromise the performance of the model in downstream tasks.
arXiv Detail & Related papers (2023-09-22T12:46:38Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has become popular recently, allowing the model owner to watermark the model.
We propose a mini-max formulation to find watermark-removed models and recover their watermark behavior (a rough sketch of the mini-max idea also appears after this list).
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - FedTracker: Furnishing Ownership Verification and Traceability for
Federated Learning Model [33.03362469978148]
Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
This sharing poses a risk of unauthorized model distribution or resale by a malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
arXiv Detail & Related papers (2022-11-14T07:40:35Z) - Watermarking in Secure Federated Learning: A Verification Framework
Based on Client-Side Backdooring [13.936013200707508]
Federated learning (FL) allows multiple participants to collaboratively build deep learning (DL) models without directly sharing data.
Copyright protection in FL is important because unreliable participants may gain access to the jointly trained model.
We propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with homomorphic encryption (HE).
arXiv Detail & Related papers (2022-11-14T06:37:01Z) - Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal
Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)