Privacy Assessment of Federated Learning using Private Personalized Layers
- URL: http://arxiv.org/abs/2106.08060v1
- Date: Tue, 15 Jun 2021 11:40:16 GMT
- Title: Privacy Assessment of Federated Learning using Private Personalized Layers
- Authors: Théo Jourdan, Antoine Boutet, Carole Frindel
- Abstract summary: Federated Learning (FL) is a collaborative scheme to train a learning model across multiple participants without sharing data.
We quantify the utility and privacy trade-off of an FL scheme using private personalized layers.
- Score: 0.9023847175654603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a collaborative scheme to train a
learning model across multiple participants without sharing data. While FL is
a clear step forward towards enforcing users' privacy, several inference
attacks against it have been developed. In this paper, we quantify the utility
and privacy trade-off of an FL scheme using private personalized layers. While
this scheme has been proposed as a local adaptation that improves the model's
accuracy through local personalization, it also has the advantage of
minimizing the information about the model exchanged with the server. However,
the privacy of such a scheme has never been quantified. Our evaluations on a
motion sensor dataset show that personalized layers speed up the convergence
of the model and slightly improve accuracy for all users compared to a
standard FL scheme, while better preventing both attribute and membership
inference compared to an FL scheme using local differential privacy.
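A minimal sketch of the evaluated scheme, assuming a toy model split into shared base layers and a private personalized head (layer names, shapes, and the local-SGD stand-in are illustrative, not the authors' architecture): each client trains all of its layers locally, but only the shared layers are sent to and averaged by the server, so the personalized layers never leave the device.

```python
import numpy as np

SHARED = ["base1", "base2"]   # layers exchanged with the server
PRIVATE = ["head"]            # personalized layers, never leave the client

def init_client(rng):
    return {"base1": rng.normal(size=(16, 8)),
            "base2": rng.normal(size=(16, 16)),
            "head":  rng.normal(size=(16, 4))}

def local_update(params, rng, lr=0.1):
    # Stand-in for local SGD: all layers, shared and private, train locally.
    return {k: v - lr * rng.normal(size=v.shape) for k, v in params.items()}

def federated_round(clients, rng):
    clients = [local_update(p, rng) for p in clients]
    # The server sees and averages only the shared layers.
    avg = {k: np.mean([c[k] for c in clients], axis=0) for k in SHARED}
    for c in clients:
        c.update({k: avg[k].copy() for k in SHARED})
    return clients

rng = np.random.default_rng(0)
clients = [init_client(rng) for _ in range(5)]
for _ in range(3):
    clients = federated_round(clients, rng)
assert np.allclose(clients[0]["base1"], clients[1]["base1"])    # shared: synced
assert not np.allclose(clients[0]["head"], clients[1]["head"])  # private: diverge
```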
Related papers
- Privacy-Preserving Federated Learning via Dataset Distillation [9.60829979241686]
Federated Learning (FL) allows users to share knowledge instead of raw data to train a model with high accuracy.
During training, users lose control over the knowledge they share, which raises serious data privacy issues.
This work proposes FLiP, which aims to bring the principle of least privilege (PoLP) to FL training.
arXiv Detail & Related papers (2024-10-25T13:20:40Z)
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has recently gained significant traction in both industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining the utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Privacy-preserving gradient-based fair federated learning [0.0]
Federated learning (FL) schemes allow multiple participants to collaboratively train neural networks without the need to share the underlying data.
In our paper, we build upon seminal works and present a novel, fair and privacy-preserving FL scheme.
arXiv Detail & Related papers (2024-07-18T19:56:39Z)
- Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond [6.731000738818571]
Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner.
In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness attained by models trained within the FL framework.
A method is put forth that introduces group privacy guarantees through the use of $d$-privacy.
arXiv Detail & Related papers (2023-09-01T12:20:19Z)
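The $d$-privacy guarantee above generalizes differential privacy by letting indistinguishability scale with a distance $d(x, x')$ between inputs. A minimal sketch, assuming the L1 metric on flattened model updates (an illustrative instantiation, not necessarily the paper's): per-coordinate Laplace noise with scale $1/\epsilon$ yields $\epsilon \cdot d$-privacy.

```python
import numpy as np

def d_private_release(x, epsilon, rng):
    # Under the L1 metric, P[M(x)=z] / P[M(x')=z] <= exp(epsilon * ||x - x'||_1):
    # nearby inputs are nearly indistinguishable, distant ones less protected.
    return x + rng.laplace(scale=1.0 / epsilon, size=x.shape)

rng = np.random.default_rng(1)
update = np.ones(4)   # a client's flattened model update
print(d_private_release(update, epsilon=2.0, rng=rng))
```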
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to the private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
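A minimal sketch of the distribution-matching idea above, assuming fixed embeddings and a nearest-to-centroid selection rule; both are illustrative simplifications (in particular, the private statistic would itself need a DP release in practice).

```python
import numpy as np

def match_public_to_private(pub_emb, priv_emb, k):
    centroid = priv_emb.mean(axis=0)               # private summary statistic
    dist = np.linalg.norm(pub_emb - centroid, axis=1)
    return np.argsort(dist)[:k]                    # k public points closest to it

rng = np.random.default_rng(2)
public  = rng.normal(0.0, 1.0, size=(1000, 32))    # plentiful public embeddings
private = rng.normal(0.5, 1.0, size=(200, 32))     # shifted private distribution
idx = match_public_to_private(public, private, k=64)
print(public[idx].mean(), public.mean())  # selection drifts toward the private mean
```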
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach achieves faster convergence than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparsity constraint to reduce communication cost.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
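A minimal sketch of a continuously differentiable L1 surrogate of the kind sFedHP uses as its sparsity constraint. The particular surrogate, sqrt(x^2 + mu^2) - mu, is an assumption: one standard smooth stand-in for |x|, differentiable at 0 where the true L1-norm is not.

```python
import numpy as np

def smoothed_l1(x, mu=1e-3):
    # Approaches sum(|x_i|) as mu -> 0, but is smooth everywhere.
    return np.sum(np.sqrt(x**2 + mu**2) - mu)

def smoothed_l1_grad(x, mu=1e-3):
    # Well-defined gradient even at x = 0, unlike the L1 subgradient.
    return x / np.sqrt(x**2 + mu**2)

w = np.array([0.0, 0.5, -2.0])
print(smoothed_l1(w), np.abs(w).sum())   # close to the true L1 norm (2.5)
print(smoothed_l1_grad(w))               # [0., ~1., ~-1.]
```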
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
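One concrete way to read the coordination above is to interpolate a non-private local model with a DP-trained global model; the convex combination below is an illustrative assumption, not necessarily the paper's estimator.

```python
import numpy as np

def personalized_predict(x, w_local, w_global_dp, alpha):
    # alpha -> 1: trust the local model (little data, but no privacy noise);
    # alpha -> 0: trust the DP global model (more data, but added noise).
    return alpha * (x @ w_local) + (1 - alpha) * (x @ w_global_dp)

rng = np.random.default_rng(3)
x, w_local, w_global_dp = (rng.normal(size=8) for _ in range(3))
print(personalized_predict(x, w_local, w_global_dp, alpha=0.3))
```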
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
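A minimal sketch of clipped FedAvg with client-level DP as analyzed above: each client update is clipped to norm C, and the server averages the clipped updates and adds Gaussian noise scaled to C/n, the per-client influence on the average. Clipping norm and noise multiplier are illustrative values.

```python
import numpy as np

def clip(update, C):
    norm = np.linalg.norm(update)
    return update * min(1.0, C / max(norm, 1e-12))  # rescale only if norm > C

def dp_fedavg_aggregate(updates, C=1.0, noise_multiplier=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    clipped = [clip(u, C) for u in updates]
    avg = np.mean(clipped, axis=0)
    # Each clipped update has norm <= C, bounding any one client's influence
    # on the average by ~C/n; the Gaussian noise is calibrated to that scale.
    sigma = noise_multiplier * C / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

rng = np.random.default_rng(4)
updates = [rng.normal(size=10) for _ in range(50)]
print(dp_fedavg_aggregate(updates, rng=rng))
```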
- Federated Learning with Local Differential Privacy: Trade-offs between Privacy, Utility, and Communication [22.171647103023773]
Federated learning (FL) allows training on massive amounts of data privately thanks to its decentralized structure.
We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD.
Our results guarantee a significantly larger utility and a smaller transmission rate as compared to existing privacy accounting methods.
arXiv Detail & Related papers (2021-02-09T10:04:18Z)
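A minimal sketch of the local-DP setting above: unlike the server-side noise in the previous sketch, each client perturbs its own clipped update with Gaussian noise before it leaves the device, so no trusted aggregator is required. Parameter values are illustrative.

```python
import numpy as np

def ldp_gaussian_update(grad, C=1.0, sigma=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, C / max(norm, 1e-12))  # bound local sensitivity
    return clipped + rng.normal(0.0, sigma * C, size=grad.shape)  # noise on-device

rng = np.random.default_rng(5)
noisy = [ldp_gaussian_update(rng.normal(size=10), rng=rng) for _ in range(100)]
print(np.linalg.norm(np.mean(noisy, axis=0)))  # per-user noise averages out
```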
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)