Bridging Differential Privacy and Byzantine-Robustness via Model
Aggregation
- URL: http://arxiv.org/abs/2205.00107v1
- Date: Fri, 29 Apr 2022 23:37:46 GMT
- Title: Bridging Differential Privacy and Byzantine-Robustness via Model
Aggregation
- Authors: Heng Zhu, Qing Ling
- Abstract summary: This paper aims at jointly addressing two conflicting issues in federated learning: differential privacy (DP) and Byzantine-robustness.
Standard DP mechanisms add noise to the transmitted messages and entangle with the robust stochastic gradient aggregation used to defend against Byzantine attacks.
We show that the influence of our proposed DP mechanisms is decoupled from that of robust stochastic model aggregation.
- Score: 27.518542543750367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims at jointly addressing two seemingly conflicting issues in
federated learning: differential privacy (DP) and Byzantine-robustness, which
are particularly challenging when the distributed data are non-i.i.d.
(independent and identically distributed). The standard DP mechanisms add noise
to the transmitted messages and entangle with robust stochastic gradient
aggregation to defend against Byzantine attacks. In this paper, we decouple the
two issues via robust stochastic model aggregation, in the sense that our
proposed DP mechanisms and the defense against Byzantine attacks have separate
influences on the learning performance. Leveraging robust stochastic model
aggregation, at each iteration, each worker calculates the difference between
the local model and the global one, followed by sending the element-wise signs
to the master node, which enables robustness to Byzantine attacks. Further, we
design two DP mechanisms to perturb the uploaded signs for the purpose of
privacy preservation, and prove that they are $(\epsilon,0)$-DP by exploiting
the properties of noise distributions. With the tools of the Moreau envelope and
proximal point projection, we establish the convergence of the proposed
algorithm when the cost function is nonconvex. We analyze the trade-off between
privacy preservation and learning performance, and show that the influence of
our proposed DP mechanisms is decoupled from that of robust stochastic model
aggregation. Numerical experiments demonstrate the effectiveness of the
proposed algorithm.
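To make the described round concrete, below is a minimal sketch in Python/NumPy. It assumes randomized response as the sign-perturbation mechanism and element-wise majority voting at the master; the paper's actual noise distributions and step rule may differ, and all names here are illustrative. For randomized response, keeping each sign with probability $p = e^\epsilon/(1+e^\epsilon)$ bounds the worst-case likelihood ratio by $p/(1-p) = e^\epsilon$, which is the sense in which each coordinate is $(\epsilon,0)$-DP.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_signs(signs, eps):
    # Randomized response on sign entries: keep each entry with
    # probability e^eps / (1 + e^eps), flip it otherwise.
    # (eps, 0)-DP per coordinate; an illustrative stand-in for the
    # paper's noise-based mechanisms.
    keep_prob = np.exp(eps) / (1.0 + np.exp(eps))
    flips = np.where(rng.random(signs.shape) < keep_prob, 1.0, -1.0)
    return signs * flips

def worker_message(local_model, global_model, eps):
    # Element-wise sign of the local/global model difference,
    # perturbed for privacy before upload.
    return perturb_signs(np.sign(local_model - global_model), eps)

def master_update(global_model, messages, lr=0.1):
    # Summing +/-1 messages and taking the sign is an element-wise
    # majority vote: each worker, Byzantine or not, contributes at
    # most one vote per coordinate.
    return global_model + lr * np.sign(np.sum(messages, axis=0))

# Toy round: 8 honest workers whose local models sit at `target`,
# plus 2 Byzantine workers that upload flipped signs.
d, eps = 5, 1.0
target, global_model = np.ones(d), np.zeros(d)
honest = [worker_message(target, global_model, eps) for _ in range(8)]
byzantine = [-np.sign(target - global_model)] * 2
global_model = master_update(global_model, np.stack(honest + byzantine))
print(global_model)  # moves toward `target` despite the attackers
```

Because every uploaded message lies in $\{-1, 0, +1\}^d$, a Byzantine worker can cast at most one vote per coordinate, while the DP perturbation only flips votes; this is consistent with the abstract's claim that the two influences on learning performance stay decoupled.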
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also desires the attack to appear plausible, wherein plausibility is determined by the density of the corrupted evidence.
arXiv Detail & Related papers (2024-11-21T17:46:55Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Distributional Shift-Aware Off-Policy Interval Estimation: A Unified
Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z) - Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level
Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time-step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z) - On the Privacy-Robustness-Utility Trilemma in Distributed Learning [7.778461949427662]
We present the first tight analysis of the error incurred by any algorithm ensuring robustness against a fraction of adversarial machines.
Our analysis exhibits a fundamental trade-off between privacy, robustness, and utility.
arXiv Detail & Related papers (2023-02-09T17:24:18Z) - Differentially Private Counterfactuals via Functional Mechanism [47.606474009932825]
We propose a novel framework to generate differentially private counterfactual (DPC) without touching the deployed model or explanation set.
In particular, we train an autoencoder with the functional mechanism to construct noisy class prototypes, and then derive the DPC from the latent prototypes.
arXiv Detail & Related papers (2022-08-04T20:31:22Z) - Additive Logistic Mechanism for Privacy-Preserving Self-Supervised
Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
arXiv Detail & Related papers (2022-05-25T01:33:52Z) - Combining Differential Privacy and Byzantine Resilience in Distributed
SGD [9.14589517827682]
This paper studies the extent to which the distributed SGD algorithm, in the standard parameter-server architecture, can learn an accurate model.
We show that many existing results on the convergence of distributed SGD under Byzantine faults, especially those relying on $(\alpha, f)$-Byzantine resilience, are rendered invalid when honest workers enforce DP.
arXiv Detail & Related papers (2021-10-08T09:23:03Z) - Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.