TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data
- URL: http://arxiv.org/abs/2402.10816v1
- Date: Fri, 16 Feb 2024 16:41:14 GMT
- Title: TernaryVote: Differentially Private, Communication Efficient, and
Byzantine Resilient Distributed Optimization on Heterogeneous Data
- Authors: Richeng Jin, Yujie Gu, Kai Yue, Xiaofan He, Zhaoyang Zhang, Huaiyu Dai
- Abstract summary: We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (f-DP) framework and characterize the Byzantine resilience of the proposed algorithm.
- Score: 50.797729676285876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed training of deep neural networks faces three critical challenges:
privacy preservation, communication efficiency, and robustness to faulty and
adversarial behaviors. Although significant research efforts have been devoted
to addressing these challenges independently, their synthesis remains less
explored. In this paper, we propose TernaryVote, which combines a ternary
compressor and the majority vote mechanism to realize differential privacy,
gradient compression, and Byzantine resilience simultaneously. We
theoretically quantify the privacy guarantee through the lens of the emerging
f-differential privacy (f-DP) framework and characterize the Byzantine
resilience of the proposed algorithm.
Particularly, in terms of privacy guarantees, compared to the existing
sign-based approach StoSign, the proposed method improves the dependence of
the privacy guarantee on the gradient dimension and enjoys privacy
amplification by mini-batch sampling, while ensuring a comparable convergence
rate. We also prove that TernaryVote is robust when fewer than 50% of the
workers are blind attackers, matching the resilience of SIGNSGD with majority
vote. Extensive experimental results
validate the effectiveness of the proposed algorithm.
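The abstract describes the mechanism only in prose, so here is a minimal sketch of the two building blocks it names: a stochastic ternary compressor at each worker and coordinate-wise majority vote at the server. The function names, the `threshold` parameter, and the unbiased randomized-rounding rule are illustrative assumptions rather than the authors' exact construction; in the paper, the randomness of the compressor itself supplies the differential privacy noise.

```python
import numpy as np

def ternary_compress(grad, threshold, rng):
    """Stochastically quantize each coordinate of grad to {-1, 0, +1}.

    A coordinate g is mapped to sign(g) with probability
    min(|g| / threshold, 1) and to 0 otherwise, so the output is an
    unbiased estimate of grad / threshold wherever |g| <= threshold.
    """
    p_keep = np.minimum(np.abs(grad) / threshold, 1.0)
    keep = rng.random(grad.shape) < p_keep
    return np.sign(grad) * keep

def majority_vote(ternary_votes):
    """Server aggregation: coordinate-wise sign of the summed votes."""
    return np.sign(np.sum(ternary_votes, axis=0))

# Toy round: 5 workers hold noisy (heterogeneous) copies of a gradient.
rng = np.random.default_rng(0)
true_grad = np.array([0.8, -0.3, 0.05, -1.2])
votes = np.stack([
    ternary_compress(true_grad + 0.1 * rng.standard_normal(4),
                     threshold=1.0, rng=rng)
    for _ in range(5)
])
update = majority_vote(votes)  # the server then applies -lr * update
```

Each worker transmits one of three symbols per coordinate (about 1.58 bits), and because the server keeps only the sign of the vote tally, a minority of arbitrarily voting workers cannot flip coordinates on which the honest majority agrees.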
Related papers
- Privacy-Preserving Distributed Maximum Consensus Without Accuracy Loss [8.539683760001575]
In distributed networks, calculating the maximum element is a fundamental task.
Despite its importance, privacy in distributed maximum consensus has received limited attention in the literature.
We propose a novel distributed optimization-based approach that preserves privacy without sacrificing accuracy.
arXiv Detail & Related papers (2024-09-16T12:21:04Z)
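For context on the task studied in the maximum-consensus entry above, here is a plain, non-private max-consensus round: each node repeatedly replaces its value with the maximum over its closed neighborhood and reaches the global maximum within graph-diameter iterations. The graph and values are illustrative; the paper's actual contribution, a privacy-preserving optimization-based variant, is not implemented here.

```python
# Plain iterative max-consensus on an undirected path graph (no privacy).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = {0: 4.0, 1: 9.0, 2: 1.0, 3: 7.0}  # local values

for _ in range(3):  # at least the diameter of the graph
    x = {i: max([x[i]] + [x[j] for j in neighbors[i]]) for i in neighbors}

assert all(v == 9.0 for v in x.values())  # every node now holds the max
```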
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- Private Optimal Inventory Policy Learning for Feature-based Newsvendor with Unknown Demand [13.594765018457904]
This paper introduces a novel approach to estimate a privacy-preserving optimal inventory policy within the f-differential privacy framework.
We develop a clipped noisy gradient descent algorithm based on convolution smoothing for optimal inventory estimation.
Our numerical experiments demonstrate that the proposed new method can achieve desirable privacy protection with a marginal increase in cost.
arXiv Detail & Related papers (2024-04-23T19:15:43Z)
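The newsvendor entry above rests on clipped noisy gradient descent over a convolution-smoothed loss. Below is a generic sketch of one such step under our own assumptions: `grad_one` stands in for the gradient of the smoothed per-sample newsvendor loss, and the clipping bound `C` and noise scale `sigma` are placeholders, not the paper's calibrated choices.

```python
import numpy as np

def dp_gd_step(w, grad_one, batch, C, sigma, lr, rng):
    """One clipped, noise-perturbed gradient step (DP-SGD style).

    Each per-sample gradient is clipped to l2 norm at most C before
    averaging, and Gaussian noise scaled to the clipping bound is added.
    """
    grads = [grad_one(w, z) for z in batch]
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in grads]
    noise = (sigma * C / len(batch)) * rng.standard_normal(w.shape)
    return w - lr * (sum(clipped) / len(batch) + noise)
```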
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel differentially private federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
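As a worked instance of the $f$-DP lens used in the entry above, the sketch below evaluates the trade-off function attained under pure $\varepsilon$-DP, which binary randomized response (report the truth with probability $e^\varepsilon / (1 + e^\varepsilon)$) achieves exactly. This is a textbook illustration, not the paper's tight derivations for other discrete-valued mechanisms.

```python
import numpy as np

def tradeoff_eps_dp(alpha, eps):
    """f(alpha) = max(0, 1 - e^eps * alpha, e^-eps * (1 - alpha)),
    the type-II error floor of any eps-DP mechanism; binary
    randomized response attains it with equality."""
    return np.maximum.reduce([
        np.zeros_like(alpha),
        1.0 - np.exp(eps) * alpha,
        np.exp(-eps) * (1.0 - alpha),
    ])

alpha = np.linspace(0.0, 1.0, 5)
print(tradeoff_eps_dp(alpha, eps=1.0))
# A higher curve means the two neighboring inputs are harder to
# distinguish, i.e. the mechanism is more private.
```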
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
arXiv Detail & Related papers (2020-08-01T20:22:57Z)
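The last entry combines random sparsification with gradient perturbation at each agent. Below is a minimal sketch of one possible combination; the perturb-then-sparsify order, the keep probability `k / d`, and the Gaussian noise model are our assumptions, not the paper's calibrated mechanism.

```python
import numpy as np

def sparsified_private_grad(g, k, sigma, rng):
    """Add Gaussian noise to a d-dimensional gradient, then keep each
    coordinate independently with probability k / d, so roughly k
    coordinates are transmitted per round."""
    noisy = g + sigma * rng.standard_normal(g.shape)
    mask = rng.random(g.shape) < (k / g.size)
    return noisy * mask

rng = np.random.default_rng(1)
g = rng.standard_normal(10)
print(sparsified_private_grad(g, k=3, sigma=0.5, rng=rng))
```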
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content and is not responsible for any consequences arising from its use.