DP-REC: Private & Communication-Efficient Federated Learning
- URL: http://arxiv.org/abs/2111.05454v1
- Date: Tue, 9 Nov 2021 23:33:11 GMT
- Title: DP-REC: Private & Communication-Efficient Federated Learning
- Authors: Aleksei Triastcyn, Matthias Reisser, Christos Louizos
- Abstract summary: We introduce a compression technique based on Relative Entropy Coding (REC) to the federated setting.
With a minor modification to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees.
Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state-of-the-art.
- Score: 16.884416092951007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy and communication efficiency are important challenges in federated
training of neural networks, and combining them is still an open problem. In
this work, we develop a method that unifies highly compressed communication and
differential privacy (DP). We introduce a compression technique based on
Relative Entropy Coding (REC) to the federated setting. With a minor
modification to REC, we obtain a provably differentially private learning
algorithm, DP-REC, and show how to compute its privacy guarantees. Our
experiments demonstrate that DP-REC drastically reduces communication costs
while providing privacy guarantees comparable to the state-of-the-art.
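The abstract states the idea only at a high level; the snippet below is a minimal, illustrative Python sketch of the generic importance-sampling form of relative entropy coding that DP-REC builds on, not the authors' implementation. The client and server share a random seed, the client selects one of the shared prior samples with probability tilted toward its local posterior over updates, and only the winning index is transmitted. DP-REC's privacy guarantee comes from a modification of this selection step that is not reproduced here; all function names and parameters are assumptions.
```python
# Minimal, hedged sketch of REC-style compression for a federated update.
# Not the authors' DP-REC code; it only illustrates index-based coding
# with shared randomness between client and server.

import numpy as np

def rec_encode(local_update, seed, num_candidates=1024, prior_std=1.0, post_std=0.1):
    """Client: pick the index of the shared prior sample that best represents
    `local_update` under a Gaussian posterior centred on it."""
    rng = np.random.default_rng(seed)                      # shared randomness
    candidates = rng.normal(0.0, prior_std, size=(num_candidates, local_update.size))

    # log importance weights log q(z)/p(z) for isotropic Gaussians
    log_q = -0.5 * np.sum((candidates - local_update) ** 2, axis=1) / post_std ** 2
    log_p = -0.5 * np.sum(candidates ** 2, axis=1) / prior_std ** 2
    log_w = log_q - log_p

    probs = np.exp(log_w - log_w.max())
    probs /= probs.sum()
    index = rng.choice(num_candidates, p=probs)            # stochastic selection
    return int(index)                                      # costs ~log2(num_candidates) bits

def rec_decode(index, seed, dim, num_candidates=1024, prior_std=1.0):
    """Server: regenerate the same candidates from the shared seed and look up the index."""
    rng = np.random.default_rng(seed)
    candidates = rng.normal(0.0, prior_std, size=(num_candidates, dim))
    return candidates[index]
```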
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting the extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
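The collaborative-inference entry above does not spell out its mechanism; a common, simple way to protect transmitted features, shown below purely as a hedged illustration (not the paper's scheme), is to clip each feature vector on the device and add calibrated Gaussian noise before it is sent. All names and parameters are illustrative.
```python
import numpy as np

def privatize_features(features, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the extracted feature vector to an L2 bound, then add Gaussian noise
    scaled to that bound (the usual noise-multiplier parameterization)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=features.shape)
    return clipped + noise                                       # transmitted to the server
```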
- Private and Communication-Efficient Federated Learning based on Differentially Private Sketches [0.4533408985664949]
Federated learning (FL) faces two primary challenges: the risk of privacy leakage and communication inefficiencies.
We propose DPSFL, a federated learning method that utilizes differentially private sketches.
We provide a theoretical analysis of privacy and convergence for the proposed method.
arXiv Detail & Related papers (2024-10-08T06:50:41Z)
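The DPSFL summary names differentially private sketches without detail; the snippet below is a hedged sketch of the general idea rather than the authors' construction: compress each client gradient into a count sketch and add Gaussian noise to the sketch, so only a small noisy table is uploaded. The hash construction and parameters are assumptions.
```python
import numpy as np

def count_sketch(grad, width=256, depth=5, seed=0):
    """Project a d-dimensional gradient into a depth x width count sketch
    using bucket and sign hashes derived from a shared seed."""
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, width, size=(depth, grad.size))   # bucket hash
    signs = rng.choice([-1.0, 1.0], size=(depth, grad.size))    # sign hash
    sketch = np.zeros((depth, width))
    for r in range(depth):
        np.add.at(sketch[r], buckets[r], signs[r] * grad)       # accumulate into buckets
    return sketch

def privatize_sketch(sketch, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the sketch and add Gaussian noise so only a noisy sketch leaves the client."""
    rng = rng or np.random.default_rng()
    sketch = sketch * min(1.0, clip_norm / (np.linalg.norm(sketch) + 1e-12))
    return sketch + rng.normal(0.0, noise_multiplier * clip_norm, sketch.shape)
```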
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
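The TernaryVote summary above names its two ingredients, a ternary compressor and a majority vote, without details; below is a hedged, generic sketch of that combination, not the paper's exact quantizer, scaling, or privacy analysis.
```python
import numpy as np

def ternarize(grad, rng=None):
    """Stochastically map each coordinate to {-1, 0, +1}; in expectation the
    output is proportional to the gradient."""
    rng = rng or np.random.default_rng()
    scale = np.max(np.abs(grad)) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / scale    # keep prob |g_i| / max|g|
    return np.sign(grad) * keep

def majority_vote(ternary_grads):
    """Server side: coordinate-wise majority vote over the clients' ternary updates."""
    return np.sign(np.sum(ternary_grads, axis=0))
```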
- Privacy-Aware Joint Source-Channel Coding for image transmission based on Disentangled Information Bottleneck [27.929075969353764]
Current privacy-aware joint source-channel coding (JSCC) works aim to avoid transmitting private information by adversarially training the JSCC encoder and decoder.
We propose a novel privacy-aware JSCC scheme based on a disentangled information bottleneck (DIB-PAJSCC).
We show that DIB-PAJSCC can reduce the eavesdropping accuracy on private information by up to 20% compared to existing methods.
arXiv Detail & Related papers (2023-09-15T06:34:22Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
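As a concrete, worked illustration of the $f$-DP lens applied to a discrete-valued mechanism (an illustration, not an example taken from the paper): binary randomized response that reports the true bit with probability $e^\varepsilon / (1 + e^\varepsilon)$ is $\varepsilon$-DP, and its tight trade-off function is the standard $f_{\varepsilon,0}$ curve computed below.
```python
import numpy as np

def tradeoff_randomized_response(alpha, eps):
    """Tight trade-off curve f(alpha) = max(0, 1 - e^eps * alpha, e^-eps * (1 - alpha))
    for eps-DP binary randomized response (type II error vs. type I error alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    return np.maximum.reduce([np.zeros_like(alpha),
                              1.0 - np.exp(eps) * alpha,
                              np.exp(-eps) * (1.0 - alpha)])

# e.g. at eps = 1, an adversary allowed a 5% false-positive rate must accept
# a false-negative rate of at least tradeoff_randomized_response(0.05, 1.0) ~= 0.86.
```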
- Differentially Private Decentralized Optimization with Relay Communication [1.2695958417031445]
We introduce a new measure: Privacy Leakage Frequency (PLF), which reveals the relationship between communication and privacy leakage of algorithms.
A novel differentially private decentralized primal-dual algorithm, DP-RECAL, is proposed; it exploits an operator-splitting method and a relay communication mechanism to achieve a lower PLF.
arXiv Detail & Related papers (2022-12-21T09:05:36Z)
- SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression [40.646108010388986]
We propose a unified framework that enhances the communication efficiency of private federated learning with communication compression.
We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity.
arXiv Detail & Related papers (2022-06-20T16:47:58Z)
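SoteriaFL, summarized above, is a framework for combining local privacy noise with a generic communication compressor; as a hedged illustration of the kind of compressor such frameworks cover (not the paper's algorithm), the snippet below implements unbiased random-k sparsification of a client update.
```python
import numpy as np

def rand_k(update, k=100, rng=None):
    """Unbiased random-k sparsification: keep k random coordinates and rescale
    by d/k so the compressed update equals the original in expectation."""
    rng = rng or np.random.default_rng()
    d = update.size
    k = min(k, d)
    idx = rng.choice(d, size=k, replace=False)
    return idx, update[idx] * (d / k)            # indices + rescaled values are uploaded

def decompress(idx, values, d):
    """Server side: scatter the sparse upload back into a dense vector."""
    out = np.zeros(d)
    out[idx] = values
    return out
```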
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
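The NeuralDP summary above says it privatises the activations of a layer; as a loose, hedged sketch of that general idea only (the paper's actual mechanism and privacy accounting differ), the PyTorch module below bounds a layer's activations and injects Gaussian noise inside the forward pass.
```python
import torch
import torch.nn as nn

class NoisyActivation(nn.Module):
    """Clamp activations to [-bound, bound] and add Gaussian noise, so every
    downstream computation only sees noisy values."""
    def __init__(self, bound: float = 1.0, noise_std: float = 0.5):
        super().__init__()
        self.bound = bound
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(x, -self.bound, self.bound)          # bound the sensitivity
        return x + self.noise_std * torch.randn_like(x)      # inject noise

# e.g. nn.Sequential(nn.Linear(784, 256), nn.ReLU(), NoisyActivation(), nn.Linear(256, 10))
```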
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) can keep the private data of mobile terminals (MTs) local while still turning that data into useful models.
From an information-theoretic viewpoint, however, a curious server can still infer private information from the shared models uploaded by the MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
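The UDP summary above describes adding artificial noise to the shared models before uploading them; the hedged sketch below shows the generic version of that step using the classical Gaussian-mechanism noise scale. The paper's user-level calibration across multiple communication rounds is more involved and is not reproduced here; names and parameters are assumptions.
```python
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    """Classical Gaussian-mechanism noise scale for a single (eps, delta) release."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def noisy_upload(update, clip_norm=1.0, eps=1.0, delta=1e-5, rng=None):
    """Clip the local model update and add Gaussian noise before it is uploaded."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / (norm + 1e-12))      # bound sensitivity
    sigma = gaussian_sigma(clip_norm, eps, delta)
    return update + rng.normal(0.0, sigma, update.shape)        # sent to the server
```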