FedTracker: Furnishing Ownership Verification and Traceability for
Federated Learning Model
- URL: http://arxiv.org/abs/2211.07160v3
- Date: Sat, 2 Mar 2024 11:52:16 GMT
- Title: FedTracker: Furnishing Ownership Verification and Traceability for
Federated Learning Model
- Authors: Shuo Shao, Wenyuan Yang, Hanlin Gu, Zhan Qin, Lixin Fan, Qiang Yang
and Kui Ren
- Abstract summary: Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
However, FL exposes the model to all participants, which poses a risk of unauthorized model distribution or resale by a malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
- Score: 33.03362469978148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed machine learning paradigm allowing
multiple clients to collaboratively train a global model without sharing their
local data. However, FL entails exposing the model to various participants.
This poses a risk of unauthorized model distribution or resale by a malicious
client, compromising the intellectual property rights of the FL group. To deter
such misbehavior, it is essential to establish a mechanism for verifying the
ownership of the model as well as tracing its origin to the leaker among the
FL participants. In this paper, we present FedTracker, the first FL model
protection framework that provides both ownership verification and
traceability. FedTracker adopts a bi-level protection scheme consisting of a
global watermark mechanism and a local fingerprint mechanism. The former
authenticates the ownership of the global model, while the latter identifies
the client from which a suspect model is derived. FedTracker leverages Continual
Learning (CL) principles to embed the watermark in a way that preserves the
utility of the FL model on both the primitive task and the watermark task.
FedTracker also devises
a novel metric to better discriminate between different fingerprints.
Experimental results show that FedTracker is effective in ownership verification
and traceability, and that it maintains good fidelity and robustness against
various watermark removal attacks.
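Below is a minimal, hypothetical sketch (PyTorch) of how such a bi-level scheme could look, based only on the abstract: a global watermark is embedded into a designated layer through a secret key matrix, a continual-learning-style quadratic penalty stands in for the CL principle that preserves primitive-task utility, and a simple bit-matching score stands in for the fingerprint-discrimination metric. The layer choice (`model.fc`), loss forms, and hyper-parameters are assumptions for illustration, not FedTracker's actual design.

```python
# A minimal, hypothetical sketch of a bi-level protection scheme in the spirit
# of the abstract. It is NOT FedTracker's actual algorithm: the watermarked
# layer (`model.fc`), the EWC-like quadratic penalty standing in for the CL
# principle, and the bit-matching fingerprint score are assumptions only.
import torch
import torch.nn.functional as F


def watermark_loss(model, key_matrix, watermark_bits):
    """Project the flattened weights of a chosen layer through a secret key
    matrix and push the result toward the target watermark bits (0/1 floats)."""
    w = model.fc.weight.flatten()          # assumed watermarked layer
    logits = key_matrix @ w                # secret linear projection
    return F.binary_cross_entropy_with_logits(logits, watermark_bits)


def cl_penalty(model, anchor_params, importance):
    """Quadratic penalty that keeps parameters close to their pre-embedding
    values, weighted by an importance estimate, so that embedding the
    watermark does not erase primitive-task utility."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (importance[name] * (p - anchor_params[name]) ** 2).sum()
    return loss


def embed_global_watermark(model, key_matrix, watermark_bits,
                           anchor_params, importance,
                           lam=1.0, lr=1e-3, steps=50):
    """Server-side embedding step, assumed to run after each aggregation."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (watermark_loss(model, key_matrix, watermark_bits)
                + lam * cl_penalty(model, anchor_params, importance))
        loss.backward()
        opt.step()
    return model


def fingerprint_similarity(extracted_bits, client_fingerprint):
    """Toy similarity: fraction of matching bits between the bits recovered
    from a suspect model and one client's assigned fingerprint."""
    return (extracted_bits == client_fingerprint).float().mean().item()
```

At verification time, one would recover bits from a suspect model with the same secret key and compare them against each client's fingerprint, flagging the client with the highest similarity as the likely leaker; this workflow is likewise an assumption inferred from the abstract rather than the paper's exact procedure.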
Related papers
- RobWE: Robust Watermark Embedding for Personalized Federated Learning
Model Ownership Protection [29.48484160966728]
This paper presents a robust watermark embedding scheme, named RobWE, to protect the ownership of personalized models in PFL.
We first decouple the watermark embedding of personalized models into two parts: head layer embedding and representation layer embedding.
For representation layer embedding, we employ a watermark slice embedding operation, which avoids watermark embedding conflicts.
arXiv Detail & Related papers (2024-02-29T11:31:50Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework for coordinating the data and computation resources of massive numbers of distributed clients during training.
Such collaboration yields non-trivial intellectual property (IP), embodied in the model parameters, that should be protected and shared by the whole group rather than by any individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single-point-of-failure problem in FL, decentralized federated learning (DFL) has been proposed, which uses peer-to-peer communication for model aggregation.
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a class-prototype similarity distillation method in a federated framework, to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- FedRight: An Effective Model Copyright Protection for Federated Learning [3.387494280613737]
Federated learning (FL) enables model training while protecting local data privacy.
For the first time, we formalize the problem of copyright protection for FL.
We propose FedRight to protect model copyright based on model fingerprints.
arXiv Detail & Related papers (2023-03-18T11:47:54Z)
- Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring [13.936013200707508]
Federated learning (FL) allows multiple participants to collaboratively build deep learning (DL) models without directly sharing data.
The issue of copyright protection in FL becomes important since unreliable participants may gain access to the jointly trained model.
We propose a novel client-side FL watermarking scheme to tackle the copyright protection issue in secure FL with homomorphic encryption (HE).
arXiv Detail & Related papers (2022-11-14T06:37:01Z)
- VeriFi: Towards Verifiable Federated Unlearning [59.169431326438676]
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request the deletion of its private data from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
arXiv Detail & Related papers (2022-05-25T12:20:02Z)
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.