Combining Differential Privacy and Byzantine Resilience in Distributed SGD
- URL: http://arxiv.org/abs/2110.03991v4
- Date: Thu, 5 Oct 2023 09:03:58 GMT
- Title: Combining Differential Privacy and Byzantine Resilience in Distributed SGD
- Authors: Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Sebastien Rouault, and John Stephan
- Abstract summary: This paper studies the extent to which the distributed SGD algorithm, in the standard parameter-server architecture, can learn an accurate model.
We show that many existing results on the convergence of distributed SGD under Byzantine faults, especially those relying on $(\alpha,f)$-Byzantine resilience, are rendered invalid when honest workers enforce DP.
- Score: 9.14589517827682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy and Byzantine resilience (BR) are two crucial requirements of
modern-day distributed machine learning. The two concepts have been extensively
studied individually but the question of how to combine them effectively
remains unanswered. This paper contributes to addressing this question by
studying the extent to which the distributed SGD algorithm, in the standard
parameter-server architecture, can learn an accurate model despite (a) a
fraction of the workers being malicious (Byzantine), and (b) the other
fraction, whilst being honest, providing noisy information to the server to
ensure differential privacy (DP). We first observe that the integration of
standard practices in DP and BR is not straightforward. In fact, we show that
many existing results on the convergence of distributed SGD under Byzantine
faults, especially those relying on $(\alpha,f)$-Byzantine resilience, are
rendered invalid when honest workers enforce DP. To circumvent this
shortcoming, we revisit the theory of $(\alpha,f)$-BR to obtain an approximate
convergence guarantee. Our analysis provides key insights on how to improve
this guarantee through hyperparameter optimization. Essentially, our
theoretical and empirical results show that (1) an imprudent combination of
standard approaches to DP and BR might be fruitless, but (2) by carefully
re-tuning the learning algorithm, we can obtain reasonable learning accuracy
while simultaneously guaranteeing DP and BR.
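To make the setup concrete, the following is a minimal sketch (not the authors' exact protocol) of one style of parameter-server round: honest workers clip their mini-batch gradients and add Gaussian noise for DP, Byzantine workers send arbitrary vectors, and the server aggregates with a coordinate-wise median as a stand-in for an $(\alpha,f)$-Byzantine-resilient rule. The least-squares loss, worker counts, and all constants are illustrative.

```python
# Hypothetical sketch: distributed SGD with DP-noised honest workers and
# Byzantine workers; NOT the authors' exact protocol.
import numpy as np

rng = np.random.default_rng(0)

def honest_gradient(w, data, clip=1.0, sigma=0.5):
    """Clipped least-squares gradient plus Gaussian noise for DP."""
    X, y = data
    g = X.T @ (X @ w - y) / len(y)                 # plain mini-batch gradient
    g = g / max(1.0, np.linalg.norm(g) / clip)     # clip to bound sensitivity
    return g + rng.normal(0.0, sigma * clip, size=g.shape)

def byzantine_gradient(dim, scale=100.0):
    """A Byzantine worker may send an arbitrary vector."""
    return rng.normal(0.0, scale, size=dim)

def robust_aggregate(grads):
    """Coordinate-wise median, one classic Byzantine-resilient rule."""
    return np.median(np.stack(grads), axis=0)

dim, n, f = 5, 10, 3                               # n workers, f Byzantine
w = np.zeros(dim)
data = [(rng.normal(size=(32, dim)), rng.normal(size=32)) for _ in range(n - f)]
for _ in range(50):                                # training rounds
    grads = [honest_gradient(w, d) for d in data]
    grads += [byzantine_gradient(dim) for _ in range(f)]
    w -= 0.1 * robust_aggregate(grads)             # server applies robust step
```

The injected noise widens the spread of the honest gradients, which is exactly the effect that invalidates the concentration assumptions behind standard $(\alpha,f)$-resilience and motivates the hyperparameter re-tuning the paper analyzes.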
Related papers
- Augment then Smooth: Reconciling Differential Privacy with Certified Robustness [32.49465965847842]
We show that standard differentially private model training is insufficient for providing strong certified robustness guarantees.
We present DP-CERT, a simple and effective method that achieves both privacy and robustness guarantees simultaneously.
arXiv Detail & Related papers (2023-06-14T17:52:02Z)
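As background for the certified-robustness claim above, here is a hedged sketch of vanilla randomized smoothing (Cohen et al., 2019), the standard certification tool this line of work builds on; DP-CERT itself is not reproduced here, and the base classifier and constants are toy placeholders.

```python
# Hedged sketch of vanilla randomized smoothing, not DP-CERT itself.
import numpy as np
from scipy.stats import norm

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    votes = np.bincount(
        [f(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n)])
    top = int(votes.argmax())
    p_hat = votes[top] / n          # point estimate of the top-class probability
    # A real certificate replaces p_hat with a lower confidence bound.
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius              # certified L2 radius around x

f = lambda x: int(x.sum() > 0)      # toy base classifier
print(smoothed_predict(f, np.full(10, 0.05)))
```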
- Practical Differentially Private and Byzantine-resilient Federated Learning [17.237219486602097]
We use our version of the differentially private stochastic gradient descent (DP-SGD) algorithm to preserve privacy.
We leverage the random noise to construct an aggregation that effectively rejects many existing Byzantine attacks.
arXiv Detail & Related papers (2023-04-15T23:30:26Z)
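The paper above uses its own DP-SGD variant; as a reference point, this is a minimal sketch of the generic DP-SGD step (per-example clipping plus Gaussian noise) on a toy least-squares problem, with all constants made up.

```python
# Generic DP-SGD step (per-example clipping + Gaussian noise); the paper uses
# its own variant, so treat this as a baseline sketch with made-up constants.
import numpy as np

rng = np.random.default_rng(1)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0):
    grads = [xi * (xi @ w - yi) for xi, yi in zip(X, y)]   # per-example grads
    clipped = [g / max(1.0, np.linalg.norm(g) / clip) for g in grads]
    noisy = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy / len(y)                         # noisy average step

w = np.zeros(3)
X, y = rng.normal(size=(16, 3)), rng.normal(size=16)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```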
- Differentially-Private Bayes Consistency [70.92545332158217]
We construct a Bayes consistent learning rule that satisfies differential privacy (DP).
We prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal sample complexity.
arXiv Detail & Related papers (2022-12-08T11:57:30Z)
- Explicit Tradeoffs between Adversarial and Natural Distributional Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
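To fix ideas for the VLR setup above, here is a hypothetical two-party sketch: each party holds a disjoint feature block, partial logits are summed to form predictions, and each party updates its own weights by mini-batch gradient descent. The exchanged partial logits and residuals are precisely the intermediate values such privacy analyses scrutinize; party count, dimensions, and constants are made up.

```python
# Hypothetical two-party vertical logistic regression with mini-batch gradient
# descent; parties, dimensions, and constants are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n, dA, dB = 64, 3, 2
XA, XB = rng.normal(size=(n, dA)), rng.normal(size=(n, dB))  # vertical split
y = rng.integers(0, 2, size=n).astype(float)
wA, wB = np.zeros(dA), np.zeros(dB)

for _ in range(200):
    idx = rng.choice(n, size=16, replace=False)   # mini-batch
    z = XA[idx] @ wA + XB[idx] @ wB               # sum of exchanged partial logits
    err = sigmoid(z) - y[idx]                     # shared residual signal
    wA -= 0.1 * XA[idx].T @ err / len(idx)        # each party updates locally
    wB -= 0.1 * XB[idx].T @ err / len(idx)
```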
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
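One common robust estimator in settings like the one above is the trimmed mean, which discards the most extreme reports on each side; the toy sketch below illustrates only that primitive, not the paper's algorithm.

```python
# Toy illustration of a trimmed mean, one common robust estimator in this
# setting; it is NOT the paper's algorithm. alpha and the reports are made up.
import numpy as np

def trimmed_mean(reports, alpha):
    r = np.sort(np.asarray(reports))
    k = int(np.ceil(alpha * len(r)))              # drop k extremes per side
    return r[k:len(r) - k].mean()

honest = np.random.default_rng(3).normal(1.0, 0.1, size=8)  # true value near 1
fake = np.array([1000.0, -1000.0])                # adversarial reports
print(trimmed_mean(np.concatenate([honest, fake]), alpha=0.2))  # still near 1
```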
- Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation [27.518542543750367]
This paper aims at addressing two conflicting issues in federated learning: differential privacy and Byzantine-robustness.
Standard DP mechanisms add noise to the transmitted messages, which entangles with the robust gradient aggregation used to defend against Byzantine attacks.
We show that the influence of our proposed DP mechanisms is decoupled from that of robust model aggregation.
arXiv Detail & Related papers (2022-04-29T23:37:46Z)
- Differential Privacy and Byzantine Resilience in SGD: Do They Add Up? [6.614755043607777]
We study whether a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and $(\alpha,f)$-Byzantine resilience.
We show that a direct composition of these techniques makes the guarantees of the resulting SGD algorithm depend unfavourably upon the number of parameters in the ML model.
arXiv Detail & Related papers (2021-02-16T14:10:38Z)
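A quick numeric illustration of the dimension dependence noted above: per-coordinate Gaussian DP noise has $\ell_2$ norm growing like $\sigma\sqrt{d}$, so in large models the noise can swamp the concentration that $(\alpha,f)$-resilience arguments rely on. The snippet below simply checks the $\sqrt{d}$ scaling empirically; $\sigma$ and the dimensions are arbitrary.

```python
# Empirical check of the dimension dependence: per-coordinate Gaussian noise
# has L2 norm growing like sigma * sqrt(d). sigma and d values are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0
for d in (10, 1_000, 100_000):
    noise = rng.normal(0.0, sigma, size=d)
    print(d, np.linalg.norm(noise), sigma * np.sqrt(d))  # measured vs sqrt(d) law
```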
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
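To see why the number of iterations matters for DP, here is a small sketch comparing basic composition (total budget linear in $T$) with the Dwork-Rothblum-Vadhan advanced composition bound (roughly $\sqrt{T}$); the per-step budget and $\delta'$ are illustrative.

```python
# Sketch of DP budget growth with the number of iterations T: basic composition
# is linear in T, advanced composition roughly sqrt(T). Constants are made up.
import math

def basic_composition(eps_step, T):
    return T * eps_step

def advanced_composition(eps_step, T, delta_prime=1e-5):
    # Dwork-Rothblum-Vadhan bound for T-fold (eps, delta)-DP composition.
    return (eps_step * math.sqrt(2 * T * math.log(1 / delta_prime))
            + T * eps_step * (math.exp(eps_step) - 1))

for T in (10, 100, 1000):
    print(T, basic_composition(0.01, T), advanced_composition(0.01, T))
```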
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with $\textit{external uncertainty}$ in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)