FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof
- URL: http://arxiv.org/abs/2305.04507v2
- Date: Wed, 10 May 2023 03:51:26 GMT
- Title: FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof
- Authors: Wenyuan Yang, Yuguo Yin, Gongxi Zhu, Hanlin Gu, Lixin Fan, Xiaochun
Cao, Qiang Yang
- Abstract summary: Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
We propose a provably secure model ownership verification scheme using zero-knowledge proof, named FedZKP.
- Score: 60.990541463214605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) allows multiple parties to cooperatively learn a
federated model without sharing private data with each other. The need to
protect such federated models from plagiarism or misuse therefore motivates us
to propose a provably secure model ownership verification scheme using
zero-knowledge proof, named FedZKP. The FedZKP scheme is shown to defeat a
variety of existing and potential attacks without disclosing credentials. Both
theoretical analysis and empirical studies demonstrate the security of FedZKP
in the sense that the probability of attackers breaching the proposed scheme
is negligible. Moreover, extensive experimental results confirm the fidelity
and robustness of our scheme.
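The abstract does not spell out the proof system, but the core idea of proving knowledge of a credential without disclosing it can be illustrated with a textbook Schnorr-style identification made non-interactive via the Fiat-Shamir heuristic. This is a hedged toy sketch, not FedZKP's actual construction; the group parameters `p`, `q`, `g` and the hash-to-challenge step are illustrative only.

```python
import hashlib
import secrets

# Toy parameters: q divides p-1 and g generates the order-q subgroup of Z_p*.
# Real deployments use groups of >= 256 bits; these tiny values are for illustration.
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret credential, never revealed
    y = pow(g, x, p)                      # public key published by the owner
    return x, y

def prove(x, y):
    r = secrets.randbelow(q - 1) + 1      # fresh randomness per proof
    t = pow(g, r, p)                      # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q                   # response; on its own leaks nothing about x
    return t, s

def verify(y, t, s):
    c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == t * y^c (mod p)

x, y = keygen()
t, s = prove(x, y)
print(verify(y, t, s))  # True
```

The check holds because g^s = g^(r + c·x) = t · y^c (mod p); a verifier learns that the prover knows x for the public y, but not x itself.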
Related papers
- Confidence Aware Learning for Reliable Face Anti-spoofing [52.23271636362843]
We propose a Confidence Aware Face Anti-spoofing model, which is aware of its capability boundary.
We estimate its confidence during the prediction of each sample.
Experiments show that the proposed CA-FAS can effectively recognize samples with low prediction confidence.
arXiv Detail & Related papers (2024-11-02T14:29:02Z)
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Federated Learning on Riemannian Manifolds with Differential Privacy [8.75592575216789]
A malicious adversary can potentially infer sensitive information through various means.
We propose a generic private FL framework based on the differential privacy (DP) technique.
We analyze the privacy guarantee while establishing the convergence properties.
Numerical simulations are performed on synthetic and real-world datasets to showcase the efficacy of the proposed PriRFed approach.
arXiv Detail & Related papers (2024-04-15T12:32:20Z)
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupted images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee [25.555844784263236]
We propose the first Federated Reinforcement Learning framework that tolerates less than half of the participating agents suffering random system failures or acting as adversarial attackers.
All theoretical results are empirically verified on various RL benchmark tasks.
arXiv Detail & Related papers (2021-10-26T23:01:22Z)
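Several entries above (notably ZkAudit) rest on cryptographic commitments: publish a binding digest of the data or weights now, open it later during an audit or dispute. A minimal hash-based commitment sketch is shown below; this is illustrative only, as the cited protocols layer zero-knowledge proofs on top of the commitments.

```python
import hashlib
import secrets

def commit(data: bytes):
    """Commit to data: publish the digest, keep the nonce and data secret."""
    nonce = secrets.token_bytes(32)                  # randomness makes the commitment hiding
    digest = hashlib.sha256(nonce + data).hexdigest()
    return digest, nonce

def open_commitment(digest: str, nonce: bytes, data: bytes) -> bool:
    """Verify a later opening against the published digest."""
    return hashlib.sha256(nonce + data).hexdigest() == digest

weights = b"serialized-model-weights"                # stand-in for real model bytes
digest, nonce = commit(weights)
print(open_commitment(digest, nonce, weights))       # True
print(open_commitment(digest, nonce, b"tampered"))   # False
```

The digest binds the committer to one value (finding a second preimage is infeasible) while revealing nothing about the weights until the nonce is disclosed.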
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.