An Accurate, Scalable and Verifiable Protocol for Federated
Differentially Private Averaging
- URL: http://arxiv.org/abs/2006.07218v3
- Date: Fri, 28 Oct 2022 14:36:46 GMT
- Title: An Accurate, Scalable and Verifiable Protocol for Federated
Differentially Private Averaging
- Authors: César Sabater, Aurélien Bellet, Jan Ramon
- Abstract summary: We tackle challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence of malicious parties.
Our first contribution is a scalable protocol in which participants exchange correlated Gaussian noise along the edges of a network graph.
Our second contribution enables users to prove the correctness of their computations without compromising the efficiency and privacy guarantees of the protocol.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from data owned by several parties, as in federated learning, raises
challenges regarding the privacy guarantees provided to participants and the
correctness of the computation in the presence of malicious parties. We tackle
these challenges in the context of distributed averaging, an essential building
block of federated learning algorithms. Our first contribution is a scalable
protocol in which participants exchange correlated Gaussian noise along the
edges of a network graph, complemented by independent noise added by each
party. We analyze the differential privacy guarantees of our protocol and the
impact of the graph topology under colluding malicious parties, showing that we
can nearly match the utility of the trusted curator model even when each honest
party communicates with only a logarithmic number of other parties chosen at
random. This is in contrast with protocols in the local model of privacy (with
lower utility) or based on secure aggregation (where all pairs of users need to
exchange messages). Our second contribution enables users to prove the
correctness of their computations without compromising the efficiency and
privacy guarantees of the protocol. Our verification protocol relies on
standard cryptographic primitives like commitment schemes and zero knowledge
proofs.
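To make the first contribution more concrete, below is a minimal Python sketch of the pairwise-cancelling noise idea described in the abstract: parties connected by an edge exchange a shared Gaussian term that one adds and the other subtracts (so it cancels in the sum), and every party additionally adds independent Gaussian noise. The function and parameter names (`correlated_noise_average`, `sigma_pair`, `sigma_indep`) and the noise magnitudes are illustrative assumptions, not the paper's calibration or full protocol, which also handles malicious parties and verification.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_noise_average(x, edges, sigma_pair=10.0, sigma_indep=0.05):
    """Toy sketch of private averaging with pairwise-cancelling noise.

    x           : list of private scalar values, one per party
    edges       : (u, v) pairs along which parties exchange correlated noise
    sigma_pair  : std of the pairwise (cancelling) Gaussian terms
    sigma_indep : std of the independent Gaussian noise each party adds
    """
    masked = np.array(x, dtype=float)

    # Correlated noise: for each edge, party u adds +delta and party v adds
    # -delta, so these terms cancel exactly in the sum over all parties.
    for u, v in edges:
        delta = rng.normal(0.0, sigma_pair)
        masked[u] += delta
        masked[v] -= delta

    # Independent noise: this is all that remains in the final average and
    # plays the role of the differential privacy noise on the released value.
    masked += rng.normal(0.0, sigma_indep, size=len(x))

    # Each party publishes its masked value; anyone can compute the average.
    return masked.mean()

# Usage: 100 parties, each exchanging noise with a handful of random peers.
n = 100
values = rng.uniform(0.0, 1.0, size=n)
edges = [(u, int(v)) for u in range(n)
         for v in rng.integers(0, n, size=5) if v != u]
print("true average  :", values.mean())
print("masked average:", correlated_noise_average(values, edges))
```

Because the pairwise terms cancel, the accuracy of the released average depends only on the independent noise; this is the intuition behind nearly matching the trusted curator model even when each honest party only exchanges noise with a few randomly chosen peers.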
Related papers
- Differential Privacy on Trust Graphs [54.55190841518906]
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data.
We give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP.
arXiv Detail & Related papers (2024-10-15T20:31:04Z)
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
- Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) framework, as well as the Byzantine resilience of the proposed algorithm (a rough sketch of the compress-and-vote idea appears after this list).
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Practical, Private Assurance of the Value of Collaboration via Fully Homomorphic Encryption [3.929854470352013]
Two parties wish to collaborate on their datasets.
One party is promised an improvement on its prediction model by incorporating data from the other party.
The parties would only wish to collaborate further if the updated model shows an improvement in accuracy.
arXiv Detail & Related papers (2023-10-04T03:47:21Z)
- Trustless Privacy-Preserving Data Aggregation on Ethereum with Hypercube Network Topology [0.0]
We have proposed a scalable privacy-preserving data aggregation protocol for summation on the blockchain.
The protocol consists of four stages: contract deployment, user registration, private submission, and proof verification.
arXiv Detail & Related papers (2023-08-29T12:51:26Z)
- Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus [11.414398732656839]
We propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training.
We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches.
arXiv Detail & Related papers (2023-06-24T07:46:00Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees [123.0401978870009]
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates.
We benchmark against competing protocols and show the empirical superiority of the proposed protocols.
Our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
arXiv Detail & Related papers (2022-05-24T04:03:07Z)
- PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party Setting [3.822543555265593]
This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
arXiv Detail & Related papers (2021-02-19T05:55:53Z)
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
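As referenced in the TernaryVote entry above, the following is a rough sketch of the general compress-and-vote pattern that its summary describes: each worker maps its gradient to ternary values in {-1, 0, +1} with some added randomness, and the aggregator takes a coordinate-wise majority vote. The function names (`ternary_compress`, `majority_vote`) and the sign-flip randomisation are illustrative assumptions, not the paper's exact mechanism or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(1)

def ternary_compress(grad, flip_prob=0.1):
    """Compress a gradient to {-1, 0, +1} per coordinate, with random sign
    flips acting as a simple stand-in for a privacy-inducing mechanism."""
    t = np.sign(grad).astype(int)               # deterministic ternary code
    flips = rng.random(grad.shape) < flip_prob  # randomised-response-style noise
    t[flips] *= -1
    return t

def majority_vote(votes):
    """Aggregate workers' ternary votes coordinate-wise by majority."""
    return np.sign(np.sum(votes, axis=0)).astype(int)

# Usage: 10 workers vote on the update direction of a 5-dimensional gradient.
true_grad = np.array([0.8, -0.3, 0.0, 1.2, -0.7])
votes = np.stack([ternary_compress(true_grad + rng.normal(0, 0.2, 5))
                  for _ in range(10)])
print("aggregated direction:", majority_vote(votes))
```

The aggregate depends only on per-coordinate vote counts, which gives the intuition for why a minority of Byzantine workers cannot arbitrarily corrupt the result while the messages stay heavily compressed.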
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.