On the Tradeoff between Privacy Preservation and Byzantine-Robustness in
Decentralized Learning
- URL: http://arxiv.org/abs/2308.14606v3
- Date: Wed, 20 Dec 2023 06:26:36 GMT
- Title: On the Tradeoff between Privacy Preservation and Byzantine-Robustness in
Decentralized Learning
- Authors: Haoxiang Ye, Heng Zhu, and Qing Ling
- Score: 27.06136955053105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper jointly considers privacy preservation and Byzantine-robustness in
decentralized learning. In a decentralized network, honest-but-curious agents
faithfully follow the prescribed algorithm, but expect to infer their
neighbors' private data from messages received during the learning process,
while dishonest-and-Byzantine agents disobey the prescribed algorithm, and
deliberately disseminate wrong messages to their neighbors so as to bias the
learning process. For this novel setting, we investigate a generic
privacy-preserving and Byzantine-robust decentralized stochastic gradient
descent (SGD) framework, in which Gaussian noise is injected to preserve
privacy and robust aggregation rules are adopted to counteract Byzantine
attacks. We analyze its learning error and privacy guarantee, discovering an
essential tradeoff between privacy preservation and Byzantine-robustness in
decentralized learning -- the learning error caused by defending against
Byzantine attacks is exacerbated by the Gaussian noise added to preserve
privacy. For a class of state-of-the-art robust aggregation rules, we give a
unified analysis of their "mixing abilities". Building upon this analysis, we
reveal how the "mixing abilities" affect the tradeoff between privacy
preservation and Byzantine-robustness. The theoretical results provide
guidelines for achieving a favorable tradeoff with proper design of robust
aggregation rules. Numerical experiments are conducted and corroborate our
theoretical findings.
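As an illustration of the framework described above, the sketch below pairs Gaussian noise injection with a robust aggregation rule. Coordinate-wise trimmed mean is used here only as a concrete stand-in; the paper analyzes a whole class of robust rules, and the parameter names are assumptions for this example.

```python
import numpy as np

def trimmed_mean(messages, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and
    smallest values per coordinate, then average the rest."""
    stacked = np.sort(np.stack(messages), axis=0)
    return stacked[trim:len(messages) - trim].mean(axis=0)

def noisy_message(x_i, sigma, rng):
    """Gaussian noise injected before transmission preserves privacy,
    but inflates the error the robust rule must absorb (the tradeoff)."""
    return x_i + rng.normal(0.0, sigma, size=x_i.shape)

def decentralized_sgd_step(x_i, neighbor_msgs, grad, lr=0.1):
    """One honest agent's update: robustly aggregate (noisy) neighbor
    models together with its own, then take a local gradient step."""
    aggregated = trimmed_mean(neighbor_msgs + [x_i])
    return aggregated - lr * grad
```

A single Byzantine neighbor sending an arbitrarily large message is discarded by the trimming, while the injected noise shifts every honest message the rule must mix.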
Related papers
- TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
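A generic sketch of the two ingredients the summary names, a ternary compressor and a coordinate-wise majority vote, is given below. This is not TernaryVote's exact mechanism or privacy accounting; the quantization probabilities are an illustrative assumption.

```python
import numpy as np

def ternary_compress(g, rng):
    """Stochastically quantize each coordinate to {-1, 0, +1}: keep
    sign(g_k) with probability |g_k| / max|g|, else emit 0. The
    randomness itself contributes to privacy and compresses g
    to roughly 1.6 bits per coordinate."""
    scale = np.max(np.abs(g)) + 1e-12
    keep = rng.random(g.shape) < np.abs(g) / scale
    return (np.sign(g) * keep).astype(int)

def majority_vote(votes):
    """Coordinate-wise sign of the summed ternary votes; a minority
    of Byzantine workers cannot flip coordinates on which the honest
    majority agrees."""
    return np.sign(np.sum(votes, axis=0)).astype(int)
```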
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Practical Differentially Private and Byzantine-resilient Federated
Learning [17.237219486602097]
We use our version of the differentially private stochastic gradient descent (DP-SGD) algorithm to preserve privacy.
We leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
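The standard DP-SGD core that this line of work builds on can be sketched as follows: clip each per-sample gradient to bound its sensitivity, then add Gaussian noise. This is a minimal textbook sketch, not the paper's specific variant or its Byzantine-rejecting aggregation.

```python
import numpy as np

def dp_sgd_aggregate(per_sample_grads, clip_norm=1.0, sigma=1.0, rng=None):
    """DP-SGD core: clip each per-sample gradient to L2 norm
    `clip_norm` (bounding any one sample's influence), sum, add
    Gaussian noise calibrated to the clip bound, and average."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```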
arXiv Detail & Related papers (2023-04-15T23:30:26Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Privacy-Preserving Distributed Expectation Maximization for Gaussian
Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach has superior performance compared to the existing approach in terms of both the accuracy and privacy level.
arXiv Detail & Related papers (2022-09-16T09:58:03Z) - Bridging Differential Privacy and Byzantine-Robustness via Model
Aggregation [27.518542543750367]
This paper aims at addressing two conflicting issues in federated learning: differential privacy and Byzantine-robustness.
Standard DP mechanisms add noise to the transmitted messages, which entangles with the robust gradient aggregation used to defend against Byzantine attacks.
We show that the influence of our proposed DP mechanism is decoupled from that of robust model aggregation.
arXiv Detail & Related papers (2022-04-29T23:37:46Z) - Secure Byzantine-Robust Distributed Learning via Clustering [16.85310886805588]
Designing federated learning systems that jointly preserve Byzantine robustness and privacy has remained an open problem.
We propose SHARE, a distributed learning framework designed to cryptographically preserve client update privacy and robustness to Byzantine adversaries simultaneously.
arXiv Detail & Related papers (2021-10-06T17:40:26Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
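Client-level clipping in FedAvg can be sketched as below: each client's model delta is clipped to a fixed norm before averaging, and server-side Gaussian noise is scaled to that clip bound. A minimal sketch under assumed parameter names, not the paper's exact algorithm or accounting.

```python
import numpy as np

def clipped_fedavg(client_updates, clip=1.0, noise_mult=0.0, rng=None):
    """Client-level DP-FedAvg round: clip each client's model delta
    to L2 norm `clip` (the source of the clipping bias the paper
    analyzes), average, and add Gaussian noise scaled to the bound."""
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(client_updates),
                       size=avg.shape)
    return avg + noise
```

When client updates point in similar directions, clipping shrinks them roughly uniformly and the bias is small; heterogeneous updates are distorted more, which is the relationship the convergence analysis highlights.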
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows adversaries to infer the agents' private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to remain invisible to the learning task.
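One simple way to realize perturbations that cancel in the network aggregate, in the spirit of the nullspace condition above but not the paper's exact construction, is antisymmetric pairwise noise: whatever agent i adds for the pair (i, j), agent j subtracts.

```python
import numpy as np

def zero_sum_perturbations(n_agents, dim, sigma=1.0, rng=None):
    """Per-agent perturbations that lie in the nullspace of the
    averaging operator: antisymmetric noise on each pair (i, j)
    guarantees the perturbations sum to zero over the network, so
    they hide individual estimates without biasing the average."""
    rng = rng or np.random.default_rng()
    perturb = np.zeros((n_agents, dim))
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            e = rng.normal(0.0, sigma, size=dim)
            perturb[i] += e
            perturb[j] -= e
    return perturb
```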
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study towards the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.