Watermarking in Secure Federated Learning: A Verification Framework
Based on Client-Side Backdooring
- URL: http://arxiv.org/abs/2211.07138v1
- Date: Mon, 14 Nov 2022 06:37:01 GMT
- Title: Watermarking in Secure Federated Learning: A Verification Framework
Based on Client-Side Backdooring
- Authors: Wenyuan Yang, Shuo Shao, Yue Yang, Xiyao Liu, Zhihua Xia, Gerald
Schaefer and Hui Fang
- Abstract summary: Federated learning (FL) allows multiple participants to collaboratively build deep learning (DL) models without directly sharing data.
The issue of copyright protection in FL becomes important since unreliable participants may gain access to the jointly trained model.
We propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with HE.
- Score: 13.936013200707508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) allows multiple participants to collaboratively build
deep learning (DL) models without directly sharing data. Consequently, the
issue of copyright protection in FL becomes important since unreliable
participants may gain access to the jointly trained model. Applying
homomorphic encryption (HE) in a secure FL framework prevents the central server
from accessing plaintext models. Thus, it is no longer feasible to embed the
watermark at the central server using existing watermarking schemes. In this
paper, we propose a novel client-side FL watermarking scheme to tackle the
copyright protection issue in secure FL with HE. To the best of our knowledge,
it is the first scheme to embed a watermark into models in a secure FL
environment. We design a black-box watermarking scheme based on client-side
backdooring to embed a pre-designed trigger set into an FL model by a
gradient-enhanced embedding method. Additionally, we propose a trigger set
construction mechanism to ensure the watermark cannot be forged. Experimental
results demonstrate that our proposed scheme delivers outstanding protection
performance and robustness against various watermark removal attacks and
ambiguity attacks.
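To make the mechanism concrete, below is a minimal PyTorch sketch of what client-side, gradient-enhanced trigger-set embedding and black-box verification could look like. The loader names, the `wm_boost` factor, and the verification threshold are illustrative assumptions rather than the paper's actual implementation, and the HE encryption of the uploaded update is elided.

```python
# Sketch only: assumes `task_loader` / `trigger_loader` yield (input, label)
# batches; `wm_boost` and `threshold` are hypothetical parameters.
import torch
import torch.nn.functional as F

def local_update(model, task_loader, trigger_loader, optimizer, wm_boost=2.0):
    """One local FL round: train on the task data, then embed the
    pre-designed trigger set with amplified ("gradient-enhanced") gradients."""
    model.train()
    for x, y in task_loader:                      # normal local training
        optimizer.zero_grad()
        F.cross_entropy(model(x), y).backward()
        optimizer.step()
    for x_t, y_t in trigger_loader:               # watermark embedding
        optimizer.zero_grad()
        F.cross_entropy(model(x_t), y_t).backward()
        for p in model.parameters():              # amplify trigger gradients
            if p.grad is not None:
                p.grad.mul_(wm_boost)
        optimizer.step()
    # In secure FL, this update would be homomorphically encrypted
    # before upload, so the server never sees plaintext weights.
    return model.state_dict()

@torch.no_grad()
def verify(model, trigger_loader, threshold=0.9):
    """Black-box verification: the owner claims the model if accuracy
    on the secret trigger set exceeds a preset threshold."""
    model.eval()
    correct = total = 0
    for x_t, y_t in trigger_loader:
        correct += (model(x_t).argmax(1) == y_t).sum().item()
        total += y_t.numel()
    return correct / total >= threshold
```

Because only the client ever touches plaintext weights, the watermark is embedded before encryption; the server aggregates ciphertexts and never needs access to the trigger set.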
Related papers
- ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark [50.08021440235581]
Embedding as a Service (EaaS) is emerging as a crucial component in AI applications.
EaaS is vulnerable to model extraction attacks, highlighting the urgent need for copyright protection.
We propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS.
arXiv Detail & Related papers (2024-10-23T04:34:49Z)
- AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let Stable Diffusion (SD) models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z)
- DIP-Watermark: A Double Identity Protection Method Based on Robust Adversarial Watermark [13.007649270429493]
Face Recognition (FR) systems pose privacy risks.
One countermeasure is adversarial attacks, which deceive unauthorized malicious FR systems.
We propose the first double identity protection scheme based on traceable adversarial watermarking.
arXiv Detail & Related papers (2024-04-23T02:50:38Z)
- DeepEclipse: How to Break White-Box DNN-Watermarking Schemes [60.472676088146436]
We present obfuscation techniques that significantly differ from the existing white-box watermarking removal schemes.
DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme.
Our evaluation reveals that DeepEclipse excels in breaking multiple white-box watermarking schemes.
arXiv Detail & Related papers (2024-03-06T10:24:47Z)
- RobWE: Robust Watermark Embedding for Personalized Federated Learning Model Ownership Protection [29.48484160966728]
This paper presents a robust watermark embedding scheme, named RobWE, to protect the ownership of personalized models in PFL.
We first decouple the watermark embedding of personalized models into two parts: head layer embedding and representation layer embedding.
For representation layer embedding, we employ a watermark slice embedding operation, which avoids watermark embedding conflicts.
arXiv Detail & Related papers (2024-02-29T11:31:50Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration yields non-trivial intellectual property (IP), embodied in the model parameters, that should be protected and shared by the federation as a whole rather than by any individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks (see the sketch after this list).
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model [33.03362469978148]
Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
This poses a risk of unauthorized model distribution or resale by the malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
arXiv Detail & Related papers (2022-11-14T07:40:35Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
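As referenced in the Safe and Robust Watermark Injection entry above, here is a minimal sketch of perturbation-hardened watermark injection. It assumes a Gaussian weight perturbation of scale `sigma`, which is an illustrative choice; the paper's exact perturbation scheme may differ.

```python
# Sketch only: one watermark-injection step where weights are randomly
# perturbed before computing the trigger-set gradient, so the embedded
# behaviour is not tied to a single sharp minimum and better survives
# later fine-tuning or pruning. `sigma` is a hypothetical parameter.
import torch
import torch.nn.functional as F

def perturbed_injection_step(model, x_t, y_t, optimizer, sigma=1e-3):
    """Inject the trigger pair (x_t, y_t) under random weight noise."""
    noises = []
    for p in model.parameters():          # temporarily perturb weights
        n = sigma * torch.randn_like(p)
        p.data.add_(n)
        noises.append(n)
    optimizer.zero_grad()
    F.cross_entropy(model(x_t), y_t).backward()   # loss at perturbed point
    for p, n in zip(model.parameters(), noises):  # undo the perturbation
        p.data.sub_(n)
    optimizer.step()                      # apply gradients to clean weights
```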
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.