SoteriaFL: A Unified Framework for Private Federated Learning with
Communication Compression
- URL: http://arxiv.org/abs/2206.09888v1
- Date: Mon, 20 Jun 2022 16:47:58 GMT
- Title: SoteriaFL: A Unified Framework for Private Federated Learning with
Communication Compression
- Authors: Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi
- Abstract summary: We propose a unified framework that enhances the communication efficiency of private federated learning with communication compression.
We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To enable large-scale machine learning in bandwidth-hungry environments such
as wireless networks, significant progress has been made recently in designing
communication-efficient federated learning algorithms with the aid of
communication compression. On the other hand, privacy preservation, especially at
the client level, is another important desideratum that has not yet been addressed
simultaneously with advanced communication compression techniques.
In this paper, we propose a unified framework that enhances the
communication efficiency of private federated learning with communication
compression. Exploiting both general compression operators and local
differential privacy, we first examine a simple algorithm that applies
compression directly to differentially-private stochastic gradient descent, and
identify its limitations. We then propose a unified framework SoteriaFL for
private federated learning, which accommodates a general family of local
gradient estimators including popular stochastic variance-reduced gradient
methods and the state-of-the-art shifted compression scheme. We provide a
comprehensive characterization of its performance trade-offs in terms of
privacy, utility, and communication complexity, where SoteriaFL is shown to
achieve better communication complexity than other private federated learning
algorithms that do not use communication compression, while sacrificing neither
privacy nor utility.
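To make the compression ideas in the abstract concrete, below is a minimal Python sketch, not the authors' exact algorithm, of (i) the baseline that applies a compression operator directly to locally differentially-private SGD updates and (ii) one common instantiation of a shifted compression round, in which each client compresses the difference between its privatized gradient and a locally maintained reference ("shift"). The rand-k operator, Gaussian mechanism, clipping threshold, step sizes, and toy quadratic objectives are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def rand_k_compress(v, k):
    """Random-k sparsification: keep k random coordinates, rescaled to remain unbiased."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out


def private_gradient(grad_fn, x, clip, sigma):
    """Local DP step: clip the stochastic gradient, then add Gaussian noise."""
    g = grad_fn(x)
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    return g + rng.normal(0.0, sigma, size=g.shape)


def cdp_sgd_round(x, clients, lr, k, clip, sigma):
    """Baseline: compress each privatized gradient directly and average at the server."""
    msgs = [rand_k_compress(private_gradient(g, x, clip, sigma), k) for g in clients]
    return x - lr * np.mean(msgs, axis=0)


def shifted_round(x, shifts, clients, lr, k, clip, sigma, alpha=0.5):
    """Shifted compression round: client i sends C(g_i - h_i); client and server
    both track the shift h_i, and the server rebuilds a gradient estimate."""
    h_mean = np.mean(shifts, axis=0)                # server's copy of the average shift
    deltas = []
    for i, g in enumerate(clients):
        gi = private_gradient(g, x, clip, sigma)
        delta = rand_k_compress(gi - shifts[i], k)  # compress the shifted gradient
        deltas.append(delta)
        shifts[i] = shifts[i] + alpha * delta       # move the shift toward g_i
    g_hat = h_mean + np.mean(deltas, axis=0)        # server-side gradient estimate
    return x - lr * g_hat, shifts


# Toy usage: four clients with quadratic objectives f_i(x) = ||x - b_i||^2 / 2,
# whose gradients are x - b_i.
d = 20
targets = [rng.normal(size=d) for _ in range(4)]
clients = [lambda x, b=b: x - b for b in targets]
x = np.zeros(d)
shifts = [np.zeros(d) for _ in clients]
for _ in range(200):
    x, shifts = shifted_round(x, shifts, clients, lr=0.1, k=5, clip=10.0, sigma=0.01)
```

In the shifted scheme the compressed message is the gap between the current privatized gradient and the shift, a quantity that shrinks as training stabilizes; the direct-compression baseline lacks this correction, and it is the scheme whose limitations the abstract says are examined first.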
Related papers
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN)
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z) - FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models [56.21666819468249]
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server.
We introduce FedComLoc, integrating practical and effective compression into Scaffnew to further enhance communication efficiency.
arXiv Detail & Related papers (2024-03-14T22:29:59Z) - Private Federated Learning with Autotuned Compression [44.295638792312694]
We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates.
Our on-the-fly methods automatically adjust the compression rate based on the error induced during training, while maintaining provable privacy guarantees.
We demonstrate the effectiveness of our approach on real-world datasets by achieving favorable compression rates without the need for tuning.
arXiv Detail & Related papers (2023-07-20T16:27:51Z) - Killing Two Birds with One Stone: Quantization Achieves Privacy in
Distributed Learning [18.824571167583432]
Communication efficiency and privacy protection are critical issues in distributed machine learning.
We propose a comprehensive quantization-based solution that could simultaneously achieve communication efficiency and privacy protection.
We theoretically capture the new trade-offs between communication, privacy, and learning performance.
arXiv Detail & Related papers (2023-04-26T13:13:04Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with
$f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - A Linearly Convergent Algorithm for Decentralized Optimization: Sending
Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators.
We prove that our method can solve the problems without any increase in the number of communications compared to the baseline.
arXiv Detail & Related papers (2020-11-03T13:35:53Z) - FedSKETCH: Communication-Efficient and Private Federated Learning via
Sketching [33.54413645276686]
Communication complexity and privacy are the two key challenges in Federated Learning.
We introduce FedSKETCH and FedSKETCHGATE algorithms to address both challenges in Federated learning jointly.
arXiv Detail & Related papers (2020-08-11T19:22:48Z) - PowerGossip: Practical Low-Rank Communication Compression in
Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD for centralized deep learning, this algorithm uses power steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)