FedIPR: Ownership Verification for Federated Deep Neural Network Models
- URL: http://arxiv.org/abs/2109.13236v1
- Date: Mon, 27 Sep 2021 12:51:24 GMT
- Title: FedIPR: Ownership Verification for Federated Deep Neural Network Models
- Authors: Lixin Fan and Bowen Li and Hanlin Gu and Jie Li and Qiang Yang
- Abstract summary: Federated learning models must be protected against plagiarism since these models are built upon valuable training data owned by multiple institutions or people.
This paper presents a novel federated deep neural network (FedDNN) ownership verification scheme that allows ownership signatures to be embedded and verified to claim legitimate intellectual property rights (IPR) of FedDNN models.
- Score: 31.459374163080994
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning models must be protected against plagiarism since these
models are built upon valuable training data owned by multiple institutions or
people. This paper presents a novel federated deep neural network (FedDNN)
ownership verification scheme that allows ownership signatures to be embedded
and verified to claim legitimate intellectual property rights (IPR) of FedDNN
models in case the models are illegally copied, re-distributed, or misused.
The effectiveness of embedded ownership signatures is theoretically justified
by proved conditions under which signatures can be embedded and detected by
multiple clients without disclosing private signatures. Extensive experimental
results on the CIFAR10 and CIFAR100 image datasets demonstrate that signatures
of varying bit-lengths can be embedded and reliably detected without affecting
the models' classification performance. Signatures are also robust against
removal attacks including fine-tuning and pruning.
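The abstract's core mechanism, embedding a private bit-string signature into model weights so that only the holder of a secret key can detect it, can be illustrated with a toy sketch. This is a simplified, post-hoc least-squares projection, not the paper's actual method (FedIPR embeds signatures via a regularization term during federated training); the variable names (`X` as a client's secret embedding matrix, `b` as the signature bits) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_signature(w, X, b, margin=1.0):
    # Minimum-norm perturbation of w such that X @ w_marked == margin * b,
    # which guarantees sign(X @ w_marked) == b after embedding.
    delta = X.T @ np.linalg.solve(X @ X.T, margin * b - X @ w)
    return w + delta

def extract_signature(w, X):
    # Recover the embedded bits by thresholding the secret projection.
    return np.sign(X @ w)

d, k = 256, 32                          # weight dimension, signature bit-length
w = rng.normal(size=d)                  # stand-in for one layer's weights
X = rng.normal(size=(k, d))             # client's private embedding matrix
b = rng.choice([-1.0, 1.0], size=k)     # client's private signature bits

w_marked = embed_signature(w, X, b)
bit_error_rate = float((extract_signature(w_marked, X) != b).mean())
perturbation = float(np.linalg.norm(w_marked - w) / np.linalg.norm(w))
```

Detection succeeds only with knowledge of the secret matrix `X`, which is what lets each client verify its own signature without disclosing it; the relative weight perturbation stays small when the bit-length `k` is much smaller than the weight dimension `d`, consistent with the abstract's claim that embedding need not hurt classification performance.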
Related papers
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints [16.564206424838485]
Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement.
We propose a novel ownership testing method called VeriDIP, which verifies a model's intellectual property.
arXiv Detail & Related papers (2023-09-07T01:58:12Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- FedRight: An Effective Model Copyright Protection for Federated Learning [3.387494280613737]
Federated learning (FL) implements model training and meanwhile protects local data privacy.
For the first time, we formalize the problem of copyright protection for FL.
We propose FedRight to protect model copyright based on model fingerprints.
arXiv Detail & Related papers (2023-03-18T11:47:54Z)
- DeepHider: A Multi-module and Invisibility Watermarking Scheme for Language Model [0.0]
This paper identifies a new threat: replacing the model's classification module and performing global fine-tuning of the model.
We use blockchain properties such as tamper-resistance and traceability to prevent ownership claims by thieves.
Experiments show that the proposed scheme successfully verifies ownership with 100% watermark verification accuracy.
arXiv Detail & Related papers (2022-08-09T11:53:24Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Models [13.043683635373213]
Deep neural networks (DNNs) have achieved tremendous success in artificial intelligence (AI) fields.
DNN models can be easily illegally copied, redistributed, or abused by criminals.
arXiv Detail & Related papers (2022-06-06T12:12:47Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark degrades classification performance by less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.