Practical Differentially Private and Byzantine-resilient Federated
Learning
- URL: http://arxiv.org/abs/2304.09762v1
- Date: Sat, 15 Apr 2023 23:30:26 GMT
- Title: Practical Differentially Private and Byzantine-resilient Federated
Learning
- Authors: Zihang Xiang, Tianhao Wang, Wanyu Lin, Di Wang
- Abstract summary: We use our version of the differentially private stochastic gradient descent (DP-SGD) algorithm to preserve privacy.
We leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
- Score: 17.237219486602097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy and Byzantine resilience are two indispensable requirements for a
federated learning (FL) system. Although there have been extensive studies on
privacy and Byzantine security in their own track, solutions that consider both
remain sparse. This is due to difficulties in reconciling privacy-preserving
and Byzantine-resilient algorithms.
In this work, we propose a solution to such a two-fold issue. We use our
version of differentially private stochastic gradient descent (DP-SGD)
algorithm to preserve privacy and then apply our Byzantine-resilient
algorithms. We note that while existing works follow this general approach, an
in-depth analysis of the interplay between DP and Byzantine resilience has been
overlooked, leading to unsatisfactory performance. Specifically, for the random
noise introduced by DP, previous works strive to reduce its impact on the
Byzantine aggregation. In contrast, we leverage the random noise to construct
an aggregation that effectively rejects many existing Byzantine attacks.
We provide both theoretical proof and empirical experiments to show our
protocol is effective: retaining high accuracy while preserving the DP
guarantee and Byzantine resilience. Compared with the previous work, our
protocol 1) achieves significantly higher accuracy even in a high privacy
regime; 2) works well even when up to 90% of the distributed workers are
Byzantine.
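To make the recipe concrete, here is a minimal sketch of the general approach described above: each worker runs Gaussian-mechanism DP-SGD, and the server uses the known noise scale to reject updates whose norms are inconsistent with honest DP noise. The filtering rule and all constants are illustrative assumptions, not the authors' exact aggregation.

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One worker's DP-SGD step (Gaussian mechanism): clip each per-example
    gradient to clip_norm, average, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

def noise_aware_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0,
                          batch_size=32, slack=3.0):
    """Server side. `updates` is a list of flattened 1-D update vectors.
    Every honest update carries DP noise of a known scale, so its norm
    concentrates around a predictable value; updates falling outside a
    slack band around that value are rejected before averaging.
    (Illustrative filter exploiting the DP noise, not the paper's protocol.)"""
    d = updates[0].size
    sigma = noise_multiplier * clip_norm / batch_size
    expected = np.sqrt(clip_norm ** 2 + sigma ** 2 * d)  # rough honest-norm estimate
    kept = [u for u in updates
            if abs(np.linalg.norm(u) - expected) <= slack * sigma * np.sqrt(d)]
    return np.mean(kept, axis=0) if kept else np.zeros(d)
```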
Related papers
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
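For reference, the $f$-DP notion invoked in the title (Dong, Roth, and Su) measures privacy through the trade-off function of a hypothesis test between a mechanism's outputs on neighboring datasets; a short restatement:

```latex
% Trade-off function: the optimal type-II error of testing P vs. Q at
% type-I error at most \alpha (\phi ranges over rejection rules):
T(P, Q)(\alpha) = \inf_{\phi} \left\{ 1 - \mathbb{E}_{Q}[\phi] : \mathbb{E}_{P}[\phi] \le \alpha \right\}

% A mechanism M is f-DP if, for every pair of neighboring datasets S \sim S',
T\bigl(M(S),\, M(S')\bigr) \ge f .

% Gaussian DP is the special case
f = G_\mu, \qquad G_\mu(\alpha) = \Phi\bigl(\Phi^{-1}(1 - \alpha) - \mu\bigr).
```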
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging $f$-differential privacy ($f$-DP) and the Byzantine resilience of the proposed algorithm.
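A minimal sketch of the two named ingredients, under assumed details (the paper's actual compressor scaling and the placement of the privacy randomness may differ): a stochastic ternary quantizer at each worker and a coordinate-wise majority vote at the server.

```python
import numpy as np

def ternary_compress(g, rng=None):
    """Stochastic ternary quantization: map each coordinate of g to
    {-1, 0, +1}, sending sign(g_i) with probability |g_i| / ||g||_inf.
    The randomness both compresses the message and, per the abstract,
    underpins the DP guarantee."""
    rng = rng or np.random.default_rng()
    scale = np.max(np.abs(g)) + 1e-12
    p = np.abs(g) / scale                  # probability of keeping the sign
    mask = rng.random(g.shape) < p
    return np.sign(g).astype(np.int8) * mask

def majority_vote(ternary_msgs):
    """Server: coordinate-wise majority vote over workers' ternary messages.
    A bounded fraction of Byzantine workers cannot flip coordinates on
    which the honest majority agrees."""
    total = np.sum(ternary_msgs, axis=0)
    return np.sign(total)                  # descend along the voted sign
```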
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Byzantine-Robust Federated Learning with Variance Reduction and
Differential Privacy [6.343100139647636]
Federated learning (FL) is designed to preserve data privacy during model training.
FL is vulnerable to privacy attacks and Byzantine attacks.
We propose a new FL scheme that guarantees rigorous privacy and simultaneously enhances system robustness against Byzantine attacks.
arXiv Detail & Related papers (2023-09-07T01:39:02Z) - On the Tradeoff between Privacy Preservation and Byzantine-Robustness in Decentralized Learning [27.06136955053105]
In a decentralized network, honest-but-curious agents faithfully follow the prescribed algorithm, but expect to infer their neighbors' private data from messages received during the learning process.
In a decentralized network, dishonest-and-Byzantine agents disobey the prescribed algorithm, and deliberately disseminate wrong messages to their neighbors so as to bias the learning process.
arXiv Detail & Related papers (2023-08-28T14:20:53Z) - Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of the agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z) - Bridging Differential Privacy and Byzantine-Robustness via Model
Aggregation [27.518542543750367]
This paper aims at addressing two conflicting issues in federated learning: differential privacy and Byzantine-robustness.
Standard DP mechanisms add noise to the transmitted messages, which entangles with the robust gradient aggregation used to defend against Byzantine attacks.
We show that the DP noise in our proposed mechanism is decoupled from the robust model aggregation.
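A rough illustration of the model-aggregation idea (assumed details, not the paper's exact scheme): workers perturb and send local models rather than gradients, and the server applies a robust statistic such as the coordinate-wise median, so the DP noise no longer interferes with gradient-level aggregation.

```python
import numpy as np

def robust_model_aggregate(local_models, noise_std, rng=None):
    """Workers add Gaussian noise to their *models* for DP; the server then
    combines the noisy models with a coordinate-wise median, a standard
    Byzantine-robust statistic. (Illustrative sketch of model aggregation.)"""
    rng = rng or np.random.default_rng()
    noisy = [m + rng.normal(0.0, noise_std, size=m.shape) for m in local_models]
    return np.median(np.stack(noisy), axis=0)
```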
arXiv Detail & Related papers (2022-04-29T23:37:46Z) - Combining Differential Privacy and Byzantine Resilience in Distributed
SGD [9.14589517827682]
This paper studies the extent to which the distributed SGD algorithm, in the standard parameter-server architecture, can learn an accurate model.
We show that many existing results on the convergence of distributed SGD under Byzantine faults, especially those relying on $(\alpha,f)$-Byzantine resilience, are rendered invalid when honest workers enforce DP.
arXiv Detail & Related papers (2021-10-08T09:23:03Z) - Differential Privacy and Byzantine Resilience in SGD: Do They Add Up? [6.614755043607777]
We study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and $(\alpha,f)$-Byzantine resilience.
We show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters in the ML model.
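Both entries above revolve around the $(\alpha,f)$-Byzantine resilience notion of Blanchard et al. (NeurIPS 2017), restated informally here for context:

```latex
% V_1, ..., V_n: i.i.d. honest gradient estimates with E[V_i] = g, of which
% up to f may be replaced by arbitrary (Byzantine) vectors. An aggregation
% rule F is (\alpha, f)-Byzantine resilient (0 <= \alpha < \pi/2) if
\langle \mathbb{E}[F],\, g \rangle \;\ge\; (1 - \sin\alpha)\, \|g\|^{2} \;>\; 0,
% i.e. the expected output points within angle \alpha of the true gradient,
% and if, for r = 2, 3, 4, the moment E||F||^r is bounded by a linear
% combination of products E||G||^{r_1} ... E||G||^{r_k} with
% r_1 + ... + r_k = r, where G is a generic honest gradient.
```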
arXiv Detail & Related papers (2021-02-16T14:10:38Z) - Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z) - Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with
Latent Confounders [62.54431888432302]
We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders.
We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data.
arXiv Detail & Related papers (2020-07-27T22:19:01Z) - Federated Variance-Reduced Stochastic Gradient Descent with Robustness
to Byzantine Attacks [74.36161581953658]
This paper deals with distributed finite-sum optimization for learning over networks in the presence of malicious Byzantine attacks.
To cope with such attacks, most resilient approaches so far combine stochastic gradient descent (SGD) with different robust aggregation rules.
The present work puts forth a Byzantine attack resilient distributed (Byrd-) SAGA approach for learning tasks involving finite-sum optimization over networks.
arXiv Detail & Related papers (2019-12-29T19:46:03Z)
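Byrd-SAGA's robust aggregation step is, to our understanding, the geometric median of the workers' SAGA-corrected gradients; a minimal Weiszfeld-iteration sketch (iteration count and tolerance are illustrative):

```python
import numpy as np

def geometric_median(points, n_iters=100, tol=1e-6):
    """Weiszfeld's algorithm: an iteratively re-weighted average converging
    to the point minimizing the sum of Euclidean distances to all inputs.
    Unlike the mean, it cannot be dragged arbitrarily far by a minority of
    Byzantine vectors."""
    pts = np.stack(points)
    z = pts.mean(axis=0)                    # initialize at the plain mean
    for _ in range(n_iters):
        dists = np.linalg.norm(pts - z, axis=1)
        w = 1.0 / np.maximum(dists, 1e-12)  # inverse-distance weights
        z_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z
```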