FedRight: An Effective Model Copyright Protection for Federated Learning
- URL: http://arxiv.org/abs/2303.10399v1
- Date: Sat, 18 Mar 2023 11:47:54 GMT
- Title: FedRight: An Effective Model Copyright Protection for Federated Learning
- Authors: Jinyin Chen, Mingjun Li, Haibin Zheng
- Abstract summary: Federated learning (FL) implements model training and meanwhile protects local data privacy.
For the first time, we formalize the problem of copyright protection for FL.
We propose FedRight to protect model copyright based on model fingerprints.
- Score: 3.387494280613737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), an effective distributed machine learning framework,
implements model training while protecting local data privacy. It has been
applied to a broad variety of practical domains due to its strong performance
and economic value. Who owns the model, and how to protect its copyright, has
become a real problem. Intuitively, the existing property rights protection
methods for centralized scenarios (e.g., watermark embedding and model
fingerprints) are possible solutions for FL. But they are still challenged by
FL's distributed nature: no data sharing, parameter aggregation, and federated
training settings. For the first time, we formalize
the problem of copyright protection for FL, and propose FedRight to protect
model copyright based on model fingerprints, i.e., extracting model features by
generating adversarial examples as model fingerprints. FedRight outperforms
previous works in four key aspects: (i) Validity: it extracts model features to
generate transferable fingerprints, which train a detector that verifies the
copyright of the model. (ii) Fidelity: it has an imperceptible impact on
federated training, thus preserving good main-task performance. (iii) Robustness: it is
empirically robust against malicious attacks on copyright protection, i.e.,
fine-tuning, model pruning, and adaptive attacks. (iv) Black-box: it is valid
in the black-box forensic scenario where only application programming interface
calls to the model are available. Extensive evaluations across 3 datasets and 9
model structures demonstrate FedRight's superior fidelity, validity, and
robustness.
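The fingerprinting idea in the abstract (adversarial examples as fingerprints, model outputs as features for a verification detector) can be sketched in a few lines. This is a hedged toy illustration, not the paper's actual pipeline: the "model" is a stand-in logistic regression, `fgsm_fingerprint` is a single FGSM step, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the aggregated FL model: a logistic-regression classifier.
w = rng.normal(size=8)

def predict_proba(x, weights):
    """Sigmoid output of the linear model."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

def logistic_loss(x, y, weights):
    p = predict_proba(x, weights)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_fingerprint(x, y, weights, eps=0.1):
    """One FGSM step: nudge x along the sign of the input gradient.
    The perturbed inputs play the role of model fingerprints."""
    p = predict_proba(x, weights)
    grad_x = (p - y) * weights          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = rng.normal(size=8)                  # a probe input
y = 1.0
x_fp = fgsm_fingerprint(x, y, w)

# The model's outputs on its fingerprints are the features that a separate
# copyright-verification detector would be trained on.
feature_clean = predict_proba(x, w)
feature_fp = predict_proba(x_fp, w)
```

Because the fingerprint step moves the input along the loss gradient, the owner model reacts to it in a characteristic way that a detector can learn to recognize.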
Related papers
- REEF: Representation Encoding Fingerprints for Large Language Models [53.679712605506715]
REEF computes and compares the centered kernel alignment similarity between the representations of a suspect model and a victim model.
This training-free REEF does not impair the model's general capabilities and is robust to sequential fine-tuning, pruning, model merging, and permutations.
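REEF's core quantity, centered kernel alignment (CKA), is a standard representation-similarity measure and can be sketched directly. The linear-kernel form below is an assumption for illustration, not necessarily REEF's exact implementation, and the activation matrices are synthetic.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation matrices
    of shape (n_samples, n_features). 1.0 means identical representations
    up to an orthogonal transform and isotropic scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
reps_victim = rng.normal(size=(100, 16))     # hypothetical victim-model activations
reps_unrelated = rng.normal(size=(100, 16))  # activations of an unrelated model

same = linear_cka(reps_victim, reps_victim)          # near 1: same model
different = linear_cka(reps_victim, reps_unrelated)  # low: independent model
```

Because CKA compares activations rather than weights, it is unaffected by parameter-level edits such as permutations, which is the robustness property the summary highlights.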
arXiv Detail & Related papers (2024-10-18T08:27:02Z)
- EncryIP: A Practical Encryption-Based Framework for Model Intellectual Property Protection [17.655627250882805]
This paper introduces a practical encryption-based framework called EncryIP.
It seamlessly integrates a public-key encryption scheme into the model learning process.
It demonstrates superior effectiveness in both training protected models and efficiently detecting the unauthorized spread of ML models.
arXiv Detail & Related papers (2023-12-19T11:11:03Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
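The summary does not spell out DUW's construction, so the sketch below shows a generic Uchida-style parameter watermark instead: a per-client bit key is embedded into the weights via a regularizer and later decoded from a leaked model to identify the infringer. All dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_params, n_bits = 64, 16
X = rng.normal(size=(n_bits, n_params))   # secret projection matrix (server-side)
key = rng.integers(0, 2, size=n_bits)     # this client's unique key bits
w = np.zeros(n_params)                    # stand-in for (part of) the model weights

def embed(w, X, key, lr=0.2, steps=1000):
    """Gradient descent on a binary cross-entropy regularizer pushing
    sigmoid(X @ w) toward the key bits (Uchida-style embedding; in real FL
    training this term would be added to the client's task loss)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - key) / len(key)   # d(BCE)/dw
    return w

def decode(w, X):
    """Recover the embedded bits from the sign pattern of the projection."""
    return (X @ w > 0).astype(int)

w = embed(w, X, key)
leaked_key = decode(w, X)   # compare against each client's key to locate the leaker
```

Since each client receives a distinct key, decoding a leaked model's bits and matching them against the key registry points to the client who first leaked it, which is the accountability requirement the entry describes.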
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- Foundation Models and Fair Use [96.04664748698103]
In the U.S. and other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine.
In this work, we survey the potential risks of developing and deploying foundation models based on copyrighted content.
We discuss technical mitigations that can help foundation models stay in line with fair use.
arXiv Detail & Related papers (2023-03-28T03:58:40Z)
- FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model [33.03362469978148]
Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
This poses a risk of unauthorized model distribution or resale by the malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
arXiv Detail & Related papers (2022-11-14T07:40:35Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)
- FedIPR: Ownership Verification for Federated Deep Neural Network Models [31.459374163080994]
Federated learning models must be protected against plagiarism since these models are built upon valuable training data owned by multiple institutions or people.
This paper illustrates a novel federated deep neural network (FedDNN) ownership verification scheme that allows ownership signatures to be embedded and verified to claim legitimate intellectual property rights (IPR) of FedDNN models.
arXiv Detail & Related papers (2021-09-27T12:51:24Z)