Communication Efficient Private Federated Learning Using Dithering
- URL: http://arxiv.org/abs/2309.07809v1
- Date: Thu, 14 Sep 2023 15:55:58 GMT
- Title: Communication Efficient Private Federated Learning Using Dithering
- Authors: Burak Hasircioglu, Deniz Gunduz
- Abstract summary: We show that employing a quantization scheme based on subtractive dithering at the clients can effectively replicate the normal noise addition process at the aggregator.
This implies we can guarantee the same level of differential privacy against other clients while substantially reducing the amount of communication required.
- Score: 2.5155469469412877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of preserving privacy while ensuring efficient communication is a
fundamental challenge in federated learning. In this work, we tackle this
challenge in the trusted aggregator model, and propose a solution that achieves
both objectives simultaneously. We show that employing a quantization scheme
based on subtractive dithering at the clients can effectively replicate the
normal noise addition process at the aggregator. This implies that we can
guarantee the same level of differential privacy against other clients while
substantially reducing the amount of communication required, as opposed to
transmitting full precision gradients and using central noise addition. We also
experimentally demonstrate that the accuracy of our proposed approach matches
that of the full precision gradient method.
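Below is a minimal, hypothetical Python/NumPy sketch of the kind of subtractive dithered quantization described in the abstract: each client perturbs its gradient with a uniform dither generated from a seed shared with the aggregator, rounds onto a coarse grid, and transmits only the quantized values; the aggregator regenerates and subtracts the dithers and averages the results. The function names, step size, and seed-sharing scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dithered_quantize(grad, step, rng):
    # Client side: add a uniform dither drawn from a generator whose seed is
    # shared with the aggregator, then round onto a grid of width `step`.
    # Only the coarse quantized values need to be transmitted.
    dither = rng.uniform(-step / 2, step / 2, size=grad.shape)
    return step * np.round((grad + dither) / step)

def aggregate(quantized, seeds, step):
    # Aggregator side: regenerate each client's dither from the shared seed,
    # subtract it, and average. The residual per-client error is uniform on
    # (-step/2, step/2) and independent of the gradient; summed over many
    # clients it behaves approximately like the Gaussian noise a trusted
    # aggregator would otherwise add for differential privacy.
    recovered = []
    for q, seed in zip(quantized, seeds):
        rng = np.random.default_rng(seed)
        dither = rng.uniform(-step / 2, step / 2, size=q.shape)
        recovered.append(q - dither)
    return np.mean(recovered, axis=0)

# Toy run with 10 clients and a 5-dimensional gradient (values illustrative).
step, seeds = 0.5, list(range(10))
grads = [np.random.randn(5) for _ in seeds]
sent = [dithered_quantize(g, step, np.random.default_rng(s)) for g, s in zip(grads, seeds)]
print(aggregate(sent, seeds, step))
```

The step size controls the trade-off between communication (bits per coordinate) and the variance of the accumulated quantization noise; calibrating it to a target differential privacy level is the subject of the paper's analysis and is not reproduced in this sketch.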
Related papers
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of using data similarity and client sampling.
To address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method.
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- The Surprising Effectiveness of Skip-Tuning in Diffusion Sampling [78.6155095947769]
Skip-Tuning is a simple yet surprisingly effective training-free tuning method on the skip connections.
Our method can achieve a 100% FID improvement for pretrained EDM on ImageNet 64 with only 19 NFEs (FID 1.75).
While Skip-Tuning increases the score-matching losses in the pixel space, the losses in the feature space are reduced.
arXiv Detail & Related papers (2024-02-23T08:05:23Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously (an illustrative sketch of this compress-and-vote pattern appears after this list).
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation [39.057968279167966]
Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a state-of-the-art algorithm known for its efficiency and high scalability.
We present a new algorithm (QAFeL) with a quantization scheme that establishes a shared "hidden" state between the server and clients to avoid the error propagation caused by direct quantization.
arXiv Detail & Related papers (2023-08-01T03:50:58Z)
- Killing Two Birds with One Stone: Quantization Achieves Privacy in Distributed Learning [18.824571167583432]
Communication efficiency and privacy protection are critical issues in distributed machine learning.
We propose a comprehensive quantization-based solution that could simultaneously achieve communication efficiency and privacy protection.
We theoretically capture the new trade-offs between communication, privacy, and learning performance.
arXiv Detail & Related papers (2023-04-26T13:13:04Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server using fewer synchronization rounds and lower communication bandwidth costs.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
- FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching [33.54413645276686]
Communication complexity and privacy are the two key challenges in Federated Learning.
We introduce the FedSKETCH and FedSKETCHGATE algorithms to jointly address both challenges in federated learning.
arXiv Detail & Related papers (2020-08-11T19:22:48Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol has lower and adaptable communication costs for similar accuracy.
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
- An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning [82.80836918594231]
Federated learning improves privacy of training data by exchanging local gradients or parameters rather than raw data.
However, an adversary can leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
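As a rough illustration of the compress-and-vote pattern mentioned in the TernaryVote entry above, the sketch below maps each gradient coordinate to {-1, 0, +1} at random and aggregates by a coordinate-wise majority vote. The randomization probabilities, scaling, and function names are assumptions made for illustration; the actual TernaryVote compressor is calibrated to its f-DP guarantee, which this sketch does not reproduce.

```python
import numpy as np

def ternary_compress(grad, scale, rng):
    # Map each coordinate to {-1, 0, +1}: keep the sign with a probability
    # proportional to the coordinate's magnitude (clipped to 1), otherwise
    # send 0. The randomness both compresses and perturbs the message.
    p_keep = np.clip(np.abs(grad) / scale, 0.0, 1.0)
    keep = rng.random(grad.shape) < p_keep
    return np.where(keep, np.sign(grad), 0.0).astype(np.int8)

def majority_vote(messages):
    # Coordinate-wise majority vote over clients' ternary messages: the sign
    # of the sum tolerates a minority of corrupted (Byzantine) inputs.
    return np.sign(np.sum(np.stack(messages), axis=0))

# Toy run with 11 clients and an 8-dimensional gradient (values illustrative).
rng = np.random.default_rng(0)
grads = [np.random.randn(8) for _ in range(11)]
msgs = [ternary_compress(g, scale=1.0, rng=rng) for g in grads]
print(majority_vote(msgs))  # aggregated update direction in {-1, 0, +1}
```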
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.