FedSOV: Federated Model Secure Ownership Verification with Unforgeable
Signature
- URL: http://arxiv.org/abs/2305.06085v1
- Date: Wed, 10 May 2023 12:10:02 GMT
- Title: FedSOV: Federated Model Secure Ownership Verification with Unforgeable
Signature
- Authors: Wenyuan Yang, Gongxi Zhu, Yuguo Yin, Hanlin Gu, Lixin Fan, Qiang Yang,
Xiaochun Cao
- Abstract summary: Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
- Score: 60.99054146321459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows multiple parties to collaborate in learning a
global model without revealing private data. The high cost of training and the
significant value of the global model make ownership verification essential
for federated learning. However, the existing ownership
verification schemes in federated learning suffer from several limitations,
such as inadequate support for a large number of clients and vulnerability to
ambiguity attacks. To address these limitations, we propose a cryptographic
signature-based federated learning model ownership verification scheme named
FedSOV. FedSOV allows numerous clients to embed their ownership credentials and
verify ownership using unforgeable digital signatures. The unforgeability of
the signatures gives the scheme theoretical resistance to ambiguity attacks.
Experimental results on computer vision and natural language
processing tasks demonstrate that FedSOV is an effective federated model
ownership verification scheme enhanced with provable cryptographic security.
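The abstract does not specify FedSOV's concrete signature scheme, so the sketch below only illustrates the generic sign/verify primitive it builds on, using textbook RSA with deliberately tiny, insecure parameters; the client and model names are made up for illustration and this is not FedSOV's actual construction.

```python
import hashlib

# Toy textbook-RSA parameters (tiny primes, for illustration only --
# a real deployment would use Ed25519 or RSA-PSS with >=2048-bit keys).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def credential_hash(client_id: str, model_tag: str) -> int:
    """Hash a client's ownership credential down to an integer mod n."""
    digest = hashlib.sha256(f"{client_id}:{model_tag}".encode()).digest()
    return int.from_bytes(digest, "big") % n

def sign(h: int) -> int:
    """Only the holder of the private exponent d can produce this."""
    return pow(h, d, n)

def verify(h: int, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check the claim."""
    return pow(sig, e, n) == h

h = credential_hash("client-42", "global-model-v1")
sig = sign(h)
# verify(h, sig) is True; verifying against a tampered credential fails,
# which is the property that blocks an ambiguity attacker without the key.
```

The point of the sketch is the asymmetry: embedding a credential requires the private key, while verification needs only the public key, so a forged ownership claim reduces to forging a signature.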
Related papers
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
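As a rough illustration of the commitment step that ZkAudit builds on (publishing a binding, hiding digest of the weights; the zero-knowledge proof machinery itself is omitted), a minimal salted-hash commitment might look like this; the function names are hypothetical:

```python
import hashlib
import secrets
import struct

def commit(weights: list[float]) -> tuple[bytes, bytes]:
    """Commit to model weights: publish the digest, keep the salt secret.
    The salt hides the weights; SHA-256 makes the commitment binding."""
    salt = secrets.token_bytes(32)
    payload = salt + b"".join(struct.pack("<d", w) for w in weights)
    return hashlib.sha256(payload).digest(), salt

def open_commitment(commitment: bytes, weights: list[float], salt: bytes) -> bool:
    """Anyone can check that revealed weights match the published digest."""
    payload = salt + b"".join(struct.pack("<d", w) for w in weights)
    return hashlib.sha256(payload).digest() == commitment

weights = [0.12, -0.5, 3.7]
c, salt = commit(weights)
# open_commitment(c, weights, salt) is True; altering any weight breaks it.
```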
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP), represented by the model parameters, that should be protected and shared by the whole federation rather than by any individual participant.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
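The abstract gives no construction details for DUW, so as a generic stand-in the sketch below embeds a client-identifying bit-string into the signs of key-selected weights; this is a common toy watermarking idea, not DUW's actual design, and all names are illustrative:

```python
import random

def embed_watermark(weights: list[float], bits: list[int], key: int) -> list[float]:
    """Force the signs of key-selected weights to encode the watermark bits."""
    rng = random.Random(key)   # the key determines which weights carry bits
    positions = rng.sample(range(len(weights)), len(bits))
    marked = list(weights)
    for pos, bit in zip(positions, bits):
        magnitude = abs(marked[pos]) or 1e-3   # keep a zero weight decodable
        marked[pos] = magnitude if bit else -magnitude
    return marked

def extract_watermark(weights: list[float], n_bits: int, key: int) -> list[int]:
    """Re-derive the same positions from the key and read off the signs."""
    rng = random.Random(key)
    positions = rng.sample(range(len(weights)), n_bits)
    return [1 if weights[pos] > 0 else 0 for pos in positions]

weights = [random.uniform(-1, 1) for _ in range(100)]
client_id_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g. a client identifier
marked = embed_watermark(weights, client_id_bits, key=7)
# extract_watermark(marked, 8, key=7) == client_id_bits
```

Decoding the recovered bit-string back to a client identity is what would let the FL group locate which client leaked the model.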
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof [60.990541463214605]
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
We propose a provable secure model ownership verification scheme using zero-knowledge proof, named FedZKP.
arXiv Detail & Related papers (2023-05-08T07:03:33Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications [12.635921154497987]
Federated learning (FL) provides an emerging approach for collaboratively training semantic encoder/decoder models of semantic communication systems.
Most existing studies on trustworthy FL aim to eliminate data poisoning threats that are produced by malicious clients.
A certificateless authentication-based trustworthy federated learning framework is proposed, which mutually authenticates the identity of clients and server.
arXiv Detail & Related papers (2023-02-01T06:26:44Z)
- FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model [33.03362469978148]
Federated learning (FL) is a distributed machine learning paradigm allowing multiple clients to collaboratively train a global model without sharing their local data.
This poses a risk of unauthorized model distribution or resale by a malicious client, compromising the intellectual property rights of the FL group.
We present FedTracker, the first FL model protection framework that provides both ownership verification and traceability.
arXiv Detail & Related papers (2022-11-14T07:40:35Z)
- FedIPR: Ownership Verification for Federated Deep Neural Network Models [31.459374163080994]
Federated learning models must be protected against plagiarism since these models are built upon valuable training data owned by multiple institutions or people.
This paper illustrates a novel federated deep neural network (FedDNN) ownership verification scheme that allows ownership signatures to be embedded and verified to claim legitimate intellectual property rights (IPR) of FedDNN models.
arXiv Detail & Related papers (2021-09-27T12:51:24Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
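The secure aggregation that RoFL's abstract mentions can be illustrated, under heavy simplification, by pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so the server's sum reveals only the aggregate update. Real systems derive the masks from key agreement and handle client dropout; everything below is a toy sketch with made-up names:

```python
import random

def masked_updates(updates: dict[int, float], seed: int = 0) -> dict[int, float]:
    """Pairwise additive masking: for each client pair (a, b), a shared random
    mask is added by a and subtracted by b, so all masks cancel in the sum.
    The shared seed stands in for pairwise key agreement."""
    clients = sorted(updates)
    rng = random.Random(seed)
    masked = dict(updates)
    for a in clients:
        for b in clients:
            if a < b:
                mask = rng.uniform(-1e3, 1e3)
                masked[a] += mask   # client a adds the shared mask
                masked[b] -= mask   # client b subtracts it
    return masked

updates = {0: 0.5, 1: -1.25, 2: 2.0}
masked = masked_updates(updates)
# Each masked value individually looks random, yet the server's sum
# matches the true aggregate up to floating-point error.
```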
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.