LDP-Fed: Federated Learning with Local Differential Privacy
- URL: http://arxiv.org/abs/2006.03637v1
- Date: Fri, 5 Jun 2020 19:15:13 GMT
- Title: LDP-Fed: Federated Learning with Local Differential Privacy
- Authors: Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, Wenqi Wei
- Abstract summary: We present LDP-Fed, a novel federated learning system with a formal privacy guarantee using local differential privacy (LDP).
Existing LDP protocols are developed primarily to ensure data privacy in the collection of single numerical or categorical values.
In federated learning, model parameter updates are collected iteratively from each participant.
- Score: 14.723892247530234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents LDP-Fed, a novel federated learning system with a formal
privacy guarantee using local differential privacy (LDP). Existing LDP
protocols are developed primarily to ensure data privacy in the collection of
single numerical or categorical values, such as click counts in Web access logs.
However, in federated learning, model parameter updates are collected
iteratively from each participant and consist of high-dimensional, continuous
values with high precision (tens of digits after the decimal point), making
existing LDP protocols inapplicable. To address this challenge, we design and
develop two novel approaches in LDP-Fed. First, LDP-Fed's LDP Module provides a
formal differential privacy guarantee for the repeated collection of model
training parameters in the federated training of large-scale neural networks
over multiple individual participants' private datasets. Second, LDP-Fed
implements a suite of selection and filtering techniques for perturbing and
sharing select parameter updates with the parameter server. We validate our
system, deployed with a condensed LDP protocol, by training deep neural networks
on public data. We compare this version of LDP-Fed, coined CLDP-Fed, with other
state-of-the-art approaches with respect to model accuracy, privacy
preservation, and system capabilities.
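To make the gap concrete, the sketch below contrasts a classic single-value LDP primitive (binary randomized response) with one plausible way to filter and perturb a high-dimensional parameter update before upload. It is a minimal illustration of the selection-plus-perturbation idea under our own assumptions (a top-k magnitude mask, per-coordinate clipping, and Laplace noise), not LDP-Fed's actual CLDP protocol; in particular, it ignores the privacy cost of the data-dependent selection step.

```python
import numpy as np

def randomized_response(bit: int, epsilon: float) -> int:
    """Classic single-value LDP: report the true bit with probability
    e^eps / (e^eps + 1); otherwise flip it. Built for one categorical
    value, not a million-dimensional gradient."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if np.random.rand() < p_truth else 1 - bit

def perturb_update(update: np.ndarray, epsilon: float,
                   clip: float = 0.01, k: int = 100) -> np.ndarray:
    """Illustrative sketch (not LDP-Fed's actual protocol): keep the k
    largest-magnitude coordinates, clip them, and add Laplace noise
    calibrated to the L1 sensitivity of the clipped, masked vector."""
    flat = update.ravel().astype(float).copy()
    # Selection/filtering: zero out all but the top-k coordinates.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros_like(flat)
    mask[idx] = 1.0
    flat = np.clip(flat * mask, -clip, clip)
    # Any two masked, clipped vectors differ by at most 2*k*clip in L1 norm,
    # so Laplace(2*k*clip/eps) noise on every coordinate yields eps-LDP for
    # this single round; repeated rounds must be composed on top of this.
    noise = np.random.laplace(scale=2.0 * k * clip / epsilon, size=flat.shape)
    return (flat + noise).reshape(update.shape)

# One client perturbs one round's update before sending it to the server.
local_update = np.random.randn(10_000) * 0.001
private_update = perturb_update(local_update, epsilon=1.0)
```

Note how the noise scale grows with the number of released coordinates k and, under basic sequential composition, with the number of training rounds (roughly T times the per-round epsilon over T rounds). That pressure is exactly what motivates perturbing and sharing only select parameter updates.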
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Efficient Verifiable Differential Privacy with Input Authenticity in the Local and Shuffle Model [3.208888890455612]
Local differential privacy (LDP) is an efficient solution for providing privacy to clients' sensitive data while simultaneously releasing aggregate statistics.
LDP has been shown to be vulnerable to malicious clients who can perform both input and output manipulation attacks.
We show how to prevent malicious clients from compromising LDP schemes.
arXiv Detail & Related papers (2024-06-27T07:12:28Z)
- DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation [15.023077875990614]
Federated learning (FL) allows clients to collaboratively train a global model without sharing their local data with a server.
Differential privacy (DP) addresses such leakage by providing formal privacy guarantees, with mechanisms that add randomness to the clients' contributions.
We propose an adaptation method that can be combined with differential privacy and call it DP-DyLoRA.
arXiv Detail & Related papers (2024-05-10T10:10:37Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach achieves faster convergence than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Locally Differentially Private Bayesian Inference [23.882144188177275]
Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
arXiv Detail & Related papers (2021-10-27T13:36:43Z)
- Representation Learning for High-Dimensional Data Collection under Local Differential Privacy [18.98782927283319]
Local differential privacy (LDP) offers a rigorous approach to preserving privacy.
Existing LDP mechanisms have successfully been applied to low-dimensional data.
In high dimensions, however, the privacy-inducing noise largely destroys the utility of the data.
arXiv Detail & Related papers (2020-10-23T15:01:19Z)
- Voting-based Approaches For Differentially Private Federated Learning [87.2255217230752]
This work is inspired by the knowledge-transfer approach to non-federated private learning from Papernot et al.
We design two new DPFL schemes by voting among the data labels returned from each local model, instead of averaging the gradients (a minimal sketch of this voting idea follows the list below).
Our approaches significantly improve the privacy-utility trade-off over the state of the art in DPFL.
arXiv Detail & Related papers (2020-10-09T23:55:19Z)
- Towards Differentially Private Text Representations [52.64048365919954]
We develop a new deep learning framework under an untrusted server setting.
For the randomization module, we propose a novel locally differentially private (LDP) protocol to reduce the impact of the privacy parameter $\epsilon$ on accuracy.
Analysis and experiments show that our framework delivers performance comparable to or even better than the non-private framework and existing LDP protocols.
arXiv Detail & Related papers (2020-06-25T04:42:18Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) can preserve the private data of mobile terminals (MTs) while still training useful models from that data.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
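As promised in the voting-based DPFL entry above, here is a minimal sketch of the noisy-label-vote idea (in the spirit of PATE-style knowledge transfer). The function name, the Laplace calibration, and the ten-model example are illustrative assumptions of our own, not the paper's exact mechanism.

```python
import numpy as np

def noisy_label_vote(local_predictions: np.ndarray, epsilon: float) -> int:
    """Aggregate one unlabeled example's predicted labels from K local models
    by a noisy plurality vote, instead of averaging gradients.
    local_predictions: array of K predicted class indices (non-negative ints).
    Returns the argmax of the Laplace-noised vote histogram."""
    # Class count inferred from the votes, for brevity of the sketch.
    num_classes = int(local_predictions.max()) + 1
    counts = np.bincount(local_predictions, minlength=num_classes).astype(float)
    # Changing one participant's vote moves one count down and another up,
    # so the histogram's L1 sensitivity is 2; Laplace(2/eps) noise makes the
    # noised counts eps-DP, and the argmax is eps-DP by post-processing.
    counts += np.random.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# Example: 10 local models vote on the label of one public sample.
votes = np.array([3, 3, 3, 1, 3, 2, 3, 3, 1, 3])
print(noisy_label_vote(votes, epsilon=1.0))
```

Because only a discrete label is released per query rather than a high-precision gradient vector, the per-query privacy cost can be much lower, which is the intuition behind the improved privacy-utility trade-off those schemes report.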
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.