DP-REC: Private & Communication-Efficient Federated Learning
- URL: http://arxiv.org/abs/2111.05454v1
- Date: Tue, 9 Nov 2021 23:33:11 GMT
- Title: DP-REC: Private & Communication-Efficient Federated Learning
- Authors: Aleksei Triastcyn, Matthias Reisser, Christos Louizos
- Abstract summary: We introduce a compression technique based on Relative Entropy Coding (REC) to the federated setting.
With a minor modification to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees.
Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state-of-the-art.
- Score: 16.884416092951007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and communication efficiency are important challenges in federated
training of neural networks, and combining them is still an open problem. In
this work, we develop a method that unifies highly compressed communication and
differential privacy (DP). We introduce a compression technique based on
Relative Entropy Coding (REC) to the federated setting. With a minor
modification to REC, we obtain a provably differentially private learning
algorithm, DP-REC, and show how to compute its privacy guarantees. Our
experiments demonstrate that DP-REC drastically reduces communication costs
while providing privacy guarantees comparable to the state-of-the-art.
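To make the compression side concrete, the sketch below shows the generic relative-entropy-coding recipe that DP-REC builds on: client and server share a prior and a random seed, the client scores K prior samples by importance weights against its local posterior, and only the index of the chosen sample is transmitted. This is a minimal illustration under assumed isotropic Gaussian distributions, not the authors' DP-REC implementation; it omits the privacy-inducing modification and the privacy accounting, and all names and parameters are placeholders.

```python
# Minimal sketch of REC-style compression via importance sampling.
# Assumptions (not from the paper): isotropic Gaussian posterior/prior, toy sizes.
import numpy as np

def rec_encode(mu_q, sigma_q, sigma_p, num_candidates, seed):
    """Client: sample K candidates from the shared prior and pick one by q/p weights."""
    rng = np.random.default_rng(seed)                 # seed is shared with the server
    d = mu_q.shape[0]
    candidates = rng.normal(0.0, sigma_p, size=(num_candidates, d))  # prior p = N(0, sigma_p^2 I)
    log_q = -0.5 * np.sum((candidates - mu_q) ** 2, axis=1) / sigma_q ** 2 - d * np.log(sigma_q)
    log_p = -0.5 * np.sum(candidates ** 2, axis=1) / sigma_p ** 2 - d * np.log(sigma_p)
    log_w = log_q - log_p                             # importance weights q(z)/p(z) in log space
    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    return int(rng.choice(num_candidates, p=probs))   # only this index (~log2 K bits) is sent

def rec_decode(index, dim, sigma_p, num_candidates, seed):
    """Server: regenerate the same candidates from the shared seed and read off the index."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(0.0, sigma_p, size=(num_candidates, dim))
    return candidates[index]

# Toy example: communicate a stand-in for a 16-dimensional local update with ~10 bits.
update = np.random.default_rng(0).normal(size=16) * 0.05
idx = rec_encode(update, sigma_q=0.05, sigma_p=0.1, num_candidates=1024, seed=42)
recovered = rec_decode(idx, dim=16, sigma_p=0.1, num_candidates=1024, seed=42)
```

With K candidates the payload is roughly log2 K bits per encoded block (here, one index for the whole toy vector), which is where the drastic communication savings come from; per the abstract, the paper's claim is that a minor modification of REC additionally yields provable differential privacy.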
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
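As a rough, assumed illustration of the selective idea above (not the paper's method), the snippet below perturbs only the region of a sample marked as sensitive and leaves the rest untouched; the clipping bound and noise scale are placeholders.

```python
# Hypothetical pixel-space illustration of masked perturbation (not the paper's mechanism).
import numpy as np

def masked_gaussian_perturbation(image, sensitive_mask, clip=1.0, sigma=0.5, seed=None):
    """Add Gaussian noise only where sensitive_mask is True; other values pass through."""
    rng = np.random.default_rng(seed)
    bounded = np.clip(image, -clip, clip)                      # bound the masked values' range
    noise = rng.normal(0.0, sigma * clip, size=image.shape)    # Gaussian-mechanism-style noise
    return np.where(sensitive_mask, bounded + noise, image)

# Toy example: perturb a 4x4 "image" only inside a 2x2 sensitive region.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = masked_gaussian_perturbation(img, mask, seed=0)
```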
- Private and Communication-Efficient Federated Learning based on Differentially Private Sketches [0.4533408985664949]
Federated learning (FL) faces two primary challenges: the risk of privacy leakage and communication inefficiencies.
We propose DPSFL, a federated learning method that utilizes differentially private sketches.
We provide a theoretical analysis of privacy and convergence for the proposed method.
arXiv Detail & Related papers (2024-10-08T06:50:41Z)
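As a generic illustration of what a "differentially private sketch" of a gradient can look like (an assumed construction, not the DPSFL algorithm), the snippet below hashes a clipped gradient into a small count-sketch table and adds Gaussian noise to the table before it would be shared.

```python
# Generic DP count-sketch of a gradient (illustrative; parameters are placeholders).
import numpy as np

def dp_count_sketch(grad, num_rows=5, num_cols=256, clip=1.0, sigma=1.0, seed=0):
    """Compress an L2-clipped gradient into an r x c table, then add Gaussian noise."""
    rng = np.random.default_rng(seed)                    # hashes/signs are data-independent
    d = grad.shape[0]
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))   # clipping bounds sensitivity
    buckets = rng.integers(0, num_cols, size=(num_rows, d))         # bucket of each coordinate, per row
    signs = rng.choice([-1.0, 1.0], size=(num_rows, d))             # random sign of each coordinate, per row
    table = np.zeros((num_rows, num_cols))
    for r in range(num_rows):
        np.add.at(table[r], buckets[r], signs[r] * grad)            # count-sketch accumulation
    table += rng.normal(0.0, sigma * clip, size=table.shape)        # Gaussian mechanism on the sketch
    return table, buckets, signs

def estimate_coordinate(table, buckets, signs, j):
    """Recover coordinate j as the median of the per-row estimates."""
    return float(np.median([signs[r, j] * table[r, buckets[r, j]] for r in range(table.shape[0])]))

g = np.random.default_rng(1).normal(size=1000) * 0.01
sketch, buckets, signs = dp_count_sketch(g)
g0_estimate = estimate_coordinate(sketch, buckets, signs, 0)
```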
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
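The sketch below illustrates the two ingredients named in the summary: a stochastic ternary compressor on each client and a coordinate-wise majority vote at the server. The quantization rule, scale, and simulated clients are assumptions for illustration; the paper's f-DP and Byzantine-resilience analysis is not reproduced here.

```python
# Illustrative ternary compression + majority vote (not the paper's exact mechanism).
import numpy as np

def ternary_compress(grad, scale, rng):
    """Map each coordinate to {-1, 0, +1}; larger |g| keeps its sign with higher probability."""
    p_keep = np.clip(np.abs(grad) / scale, 0.0, 1.0)
    keep = rng.random(grad.shape) < p_keep        # this randomness is what a DP analysis would exploit
    return np.where(keep, np.sign(grad), 0.0)

def majority_vote(ternary_votes):
    """Server: aggregate by the sign of the coordinate-wise sum of the clients' votes."""
    return np.sign(np.sum(ternary_votes, axis=0))

rng = np.random.default_rng(0)
true_grad = rng.normal(size=32)
votes = np.stack([ternary_compress(true_grad + 0.5 * rng.normal(size=32), scale=2.0, rng=rng)
                  for _ in range(15)])            # 15 clients with noisy local gradients
update_direction = majority_vote(votes)           # sign-based update direction on the server
```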
- Privacy-Aware Joint Source-Channel Coding for image transmission based on Disentangled Information Bottleneck [27.929075969353764]
Current privacy-aware joint source-channel coding (JSCC) works aim to avoid transmitting private information by adversarially training the JSCC encoder and decoder.
We propose a novel privacy-aware JSCC scheme based on the disentangled information bottleneck (DIB-PAJSCC).
We show that DIB-PAJSCC can reduce the eavesdropping accuracy on private information by up to 20% compared to existing methods.
arXiv Detail & Related papers (2023-09-15T06:34:22Z)
- Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with a differential privacy guarantee.
arXiv Detail & Related papers (2023-08-07T06:07:04Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
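For reference, the standard definitions behind the $f$-DP analysis mentioned above (background from the $f$-DP literature, not results of the cited paper) are:

```latex
% Trade-off function between output distributions P (on dataset D) and Q (on a neighbor D'),
% over rejection rules \phi with type-I error at most \alpha:
\[
  T(P, Q)(\alpha) \;=\; \inf_{\phi}\bigl\{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\bigr\}.
\]
% A mechanism M is f-DP if T(M(D), M(D')) \ge f for all neighboring datasets D, D'.
% Gaussian DP is the special case f = G_\mu, and (\varepsilon,\delta)-DP corresponds to f_{\varepsilon,\delta}:
\[
  G_\mu(\alpha) = \Phi\bigl(\Phi^{-1}(1-\alpha) - \mu\bigr), \qquad
  f_{\varepsilon,\delta}(\alpha) = \max\bigl\{0,\; 1-\delta-e^{\varepsilon}\alpha,\; e^{-\varepsilon}(1-\delta-\alpha)\bigr\}.
\]
```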
- SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression [40.646108010388986]
We propose a unified framework that enhances the communication efficiency of private federated learning with communication compression.
We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity.
arXiv Detail & Related papers (2022-06-20T16:47:58Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising the activations of a chosen layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
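The snippet below sketches one simple way to privatise a layer's activations: L2-clip each example's pre-activation vector and add Gaussian noise inside the forward pass. This is an assumed, generic construction for illustration, not the NeuralDP mechanism, and it omits the privacy accounting the paper provides.

```python
# Illustrative clip-and-noise on a layer's activations (not the NeuralDP mechanism).
import numpy as np

def private_layer_forward(x, weights, clip=1.0, sigma=0.5, rng=None):
    """Linear layer whose per-example activations are L2-clipped and Gaussian-perturbed."""
    rng = rng if rng is not None else np.random.default_rng()
    h = x @ weights                                              # pre-activations, one row per example
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = h * np.minimum(1.0, clip / (norms + 1e-12))              # bound each example's contribution
    h = h + rng.normal(0.0, sigma * clip, size=h.shape)          # Gaussian noise on the activations
    return np.maximum(h, 0.0)                                    # ReLU applied after the perturbation

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))              # batch of 8 examples with 16 features
w = 0.1 * rng.normal(size=(16, 32))
h_private = private_layer_forward(x, w, rng=rng)
```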
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) can keep the private data of mobile terminals (MTs) protected while still turning that data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
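The generic pattern behind UDP-style user-level noise addition is sketched below under assumed clipping and noise parameters (not the paper's calibration or accounting): each client clips its model update and adds artificial Gaussian noise before uploading.

```python
# Illustrative user-level privatisation of a model update before upload.
import numpy as np

def privatize_update(local_weights, global_weights, clip=1.0, sigma=1.0, rng=None):
    """Clip the client's update to norm <= clip, then add Gaussian noise scaled to that bound."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = local_weights - global_weights
    delta = delta * min(1.0, clip / (np.linalg.norm(delta) + 1e-12))   # bound per-user sensitivity
    return global_weights + delta + rng.normal(0.0, sigma * clip, size=delta.shape)

rng = np.random.default_rng(0)
w_global = rng.normal(size=100)
w_local = w_global + 0.01 * rng.normal(size=100)        # stand-in for a locally trained model
upload = privatize_update(w_local, w_global, rng=rng)   # only this noisy model leaves the device
```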