LDP-FL: Practical Private Aggregation in Federated Learning with Local
Differential Privacy
- URL: http://arxiv.org/abs/2007.15789v2
- Date: Fri, 21 May 2021 11:24:54 GMT
- Title: LDP-FL: Practical Private Aggregation in Federated Learning with Local
Differential Privacy
- Authors: Lichao Sun, Jianwei Qian, Xun Chen
- Abstract summary: Federated learning is a popular approach for privacy protection that collects the local gradient information instead of real data.
Previous works do not give a practical solution due to three issues, the last of which is that the privacy budget explodes due to the high dimensionality of weights in deep learning models.
- Score: 20.95527613004989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training machine learning models on sensitive user data has raised increasing
privacy concerns in many areas. Federated learning is a popular approach for
privacy protection that collects the local gradient information instead of real
data. One way to achieve a strict privacy guarantee is to apply local
differential privacy into federated learning. However, previous works do not
give a practical solution due to three issues. First, the noisy data is close
to its original value with high probability, increasing the risk of information
exposure. Second, a large variance is introduced to the estimated average,
causing poor accuracy. Last, the privacy budget explodes due to the high
dimensionality of weights in deep learning models. In this paper, we propose a
novel design of a local differential privacy mechanism for federated learning
that addresses the above issues. It makes the perturbed data more distinct from
the original value while introducing lower variance. Moreover, the
proposed mechanism bypasses the curse of dimensionality by splitting and
shuffling model updates. A series of empirical evaluations on three commonly
used datasets, MNIST, Fashion-MNIST and CIFAR-10, demonstrate that our solution
can achieve superior deep learning performance and provide a strong privacy
guarantee at the same time.
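To make the mechanism concrete, below is a minimal Python sketch of the two ideas the abstract describes: a two-output perturbation that reports one of two values far from the original weight while remaining unbiased in expectation, and a split-and-shuffle step that breaks the link between a client and its full weight vector. The parameter names (`c` for the center and `r` for the radius of the weight range, `eps` for the per-weight budget) and the exact probability form are illustrative assumptions, not the paper's verbatim algorithm.

```python
import math
import random

def perturb_weight(w, c, r, eps):
    """Report one of two discrete values for a weight w in [c - r, c + r].

    The report is far from the original value (issue 1) yet is an unbiased
    estimate of w, and the two-point support keeps the variance bounded
    (issue 2). Each report satisfies eps-LDP for that weight.
    """
    e = math.exp(eps)
    alpha = (e + 1) / (e - 1)
    # Probability of the "high" report, linear in w so that the
    # expected report equals w exactly.
    p_high = ((w - c) * (e - 1) + r * (e + 1)) / (2 * r * (e + 1))
    return c + r * alpha if random.random() < p_high else c - r * alpha

def split_and_shuffle(client_updates):
    """Split each client's update into (parameter index, value) pairs and
    shuffle the combined pool: the server can still average per parameter,
    but cannot reassemble any single client's full model, mitigating the
    dimensionality blow-up of the privacy budget (issue 3)."""
    pool = [(i, v) for update in client_updates for i, v in enumerate(update)]
    random.shuffle(pool)
    return pool

# Unbiasedness check: averaging many perturbed copies recovers the weight.
reports = [perturb_weight(0.3, c=0.0, r=1.0, eps=1.0) for _ in range(100_000)]
print(sum(reports) / len(reports))  # approximately 0.3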
Related papers
- Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of federated differential privacy, which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
arXiv Detail & Related papers (2024-03-17T21:04:48Z) - FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT encounters limitations such as the scarcity of instruction data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - Share Your Representation Only: Guaranteed Improvement of the
Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of such information leakage with state-of-the-art differentially private algorithms does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z) - Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution under differential privacy.
arXiv Detail & Related papers (2022-09-23T13:58:40Z) - Privacy-Preserving Distributed Expectation Maximization for Gaussian
Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns, as it transmits only intermediate updates rather than private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach outperforms the existing approach in terms of both accuracy and privacy level.
arXiv Detail & Related papers (2022-09-16T09:58:03Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - Don't Generate Me: Training Differentially Private Generative Models
with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a minimal sketch of these two ingredients appears after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce the communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
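As a point of reference for the gradient clipping and noise addition mentioned in the robustness entry above, here is a minimal NumPy sketch of one DP-SGD-style update step, the standard recipe these ingredients come from (Abadi et al., 2016). It is a generic illustration under assumed names (`clip_norm`, `noise_multiplier`), not the evaluated paper's exact training loop.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr):
    """One DP-SGD-style update: clip each example's gradient to bound its
    influence, add Gaussian noise calibrated to that bound, then average."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape
    )
    return params - lr * noisy_sum / len(per_example_grads)
```

Clipping bounds each example's sensitivity and the noise scale is calibrated to that bound; both knobs trade accuracy for privacy, which is why studies like the robustness paper above examine their side effects.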