Subject Granular Differential Privacy in Federated Learning
- URL: http://arxiv.org/abs/2206.03617v2
- Date: Thu, 15 Jun 2023 15:37:11 GMT
- Title: Subject Granular Differential Privacy in Federated Learning
- Authors: Virendra J. Marathe and Pallika Kanani and Daniel W. Peterson and Guy Steele Jr
- Abstract summary: We propose two new algorithms that enforce subject level DP at each federation user locally.
Our first algorithm, called LocalGroupDP, is a straightforward application of group differential privacy in the popular DP-SGD algorithm.
Our second algorithm is based on a novel idea of hierarchical gradient averaging (HiGradAvgDP) for subjects participating in a training mini-batch.
- Score: 2.9439848714137447
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper considers subject level privacy in the FL setting, where a subject
is an individual whose private information is embodied by several data items
either confined within a single federation user or distributed across multiple
federation users. We propose two new algorithms that enforce subject level DP
at each federation user locally. Our first algorithm, called LocalGroupDP, is a
straightforward application of group differential privacy in the popular DP-SGD
algorithm. Our second algorithm is based on a novel idea of hierarchical
gradient averaging (HiGradAvgDP) for subjects participating in a training
mini-batch. We also show that user level Local Differential Privacy (LDP)
naturally guarantees subject level DP. We observe the problem of horizontal
composition of subject level privacy loss in FL: subject level privacy loss
incurred at individual users composes across the federation. We formally prove
the subject level DP guarantee for our algorithms, and also show their effect
on model utility loss. Our empirical evaluation on FEMNIST and Shakespeare
datasets shows that LocalGroupDP delivers the best performance among our
algorithms. However, its model utility lags behind that of models trained using
a DP-SGD based algorithm that provides a weaker item level privacy guarantee.
Privacy loss amplification due to subject sampling fractions and horizontal
composition remain key challenges for model utility.
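To make the two local algorithms concrete, below is a minimal sketch of a subject-level noisy-gradient step. It is reconstructed from the abstract alone: the function names (clip, higradavg_dp_step), the exact order of averaging, clipping, and noising, and the use of the Gaussian mechanism are assumptions, not the paper's actual pseudocode.

```python
# Hypothetical sketch of a subject-level DP-SGD-style step, following the
# abstract's description of HiGradAvgDP (hierarchical gradient averaging).
# Names and details are illustrative assumptions, not the paper's algorithm.
import numpy as np
from collections import defaultdict

def clip(g, C):
    """Scale gradient g so its L2 norm is at most C."""
    norm = np.linalg.norm(g)
    return g if norm <= C else g * (C / norm)

def higradavg_dp_step(per_item_grads, subject_ids, C, sigma, rng):
    """One noisy update where each subject, not each item, is the privacy unit.

    per_item_grads: list of np.ndarray gradients, one per training item
    subject_ids:    the subject owning each item (parallel to per_item_grads)
    C, sigma:       clipping norm and noise multiplier of the Gaussian mechanism
    """
    # Group per-item gradients by the subject whose data produced them.
    by_subject = defaultdict(list)
    for g, s in zip(per_item_grads, subject_ids):
        by_subject[s].append(g)

    # Hierarchical averaging: average within each subject first, then clip the
    # per-subject average. Each subject contributes at most one vector of norm
    # <= C, so the update's L2 sensitivity to any one subject's data is C.
    subject_grads = [clip(np.mean(gs, axis=0), C) for gs in by_subject.values()]

    noisy_sum = np.sum(subject_grads, axis=0) \
        + rng.normal(0.0, sigma * C, size=subject_grads[0].shape)
    return noisy_sum / len(subject_grads)

# LocalGroupDP, by contrast, would keep ordinary item-level DP-SGD and account
# for a subject's up-to-k items in a mini-batch via group privacy, which
# roughly inflates the epsilon of the item-level guarantee by a factor of k.
```

The point the sketch tries to capture is the sensitivity argument: averaging a subject's per-item gradients into one clipped vector bounds each subject's influence on the update by a single clipping norm C, which is exactly what a subject-level DP analysis needs.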
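The horizontal composition problem noted above can likewise be made concrete with a generic worst-case bound. This is standard sequential composition, not the paper's own tighter, algorithm-specific analysis: suppose a subject's items reside at m federation users and each user locally enforces (epsilon, delta) subject-level DP.

```latex
% Worst-case, federation-wide privacy loss for a subject whose data is
% held by m users, each enforcing (\epsilon, \delta) subject-level DP
% locally, via basic sequential composition:
\[
  (\epsilon_{\mathrm{total}},\ \delta_{\mathrm{total}})
    \;\le\; (m\,\epsilon,\ m\,\delta).
\]
% Advanced composition improves the first term to roughly
% O(\sqrt{m \log(1/\delta')} \, \epsilon) at the cost of an extra
% additive \delta' in the second term.
```

Either way, the loss grows with the number of users a subject spans, which is why the abstract singles out horizontal composition as a key challenge for model utility.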
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring that models are 'almost indistinguishable' with or without the data of any particular privacy unit.
We study user-level DP motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - ULDP-FL: Federated Learning with Across Silo User-Level Differential Privacy [19.017342515321918]
Differentially Private Federated Learning (DP-FL) has garnered attention as a collaborative machine learning approach that ensures formal privacy.
We present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL where a single user's data may belong to multiple silos.
arXiv Detail & Related papers (2023-08-23T15:50:51Z) - Fairness-aware Differentially Private Collaborative Filtering [22.815168994407358]
We propose DP-Fair, a two-stage framework for collaborative filtering based algorithms.
Specifically, it combines differential privacy mechanisms with fairness constraints to protect user privacy while ensuring fair recommendations.
arXiv Detail & Related papers (2023-03-16T17:44:39Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach achieves faster convergence than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into finite clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy, referred to as DP-FedC, in which partial client participation and multiple local updates are also considered.
Various attributes of the proposed DP-FedC are obtained through theoretical analyses of privacy protection, especially for the case of non-identically and independently distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - On Privacy and Personalization in Cross-Silo Federated Learning [39.031422430404405]
In this work, we consider the application of differential privacy in cross-silo federated learning (FL).
We show that mean-regularized multi-task learning (MR-MTL) is a strong baseline for cross-silo FL.
We provide a thorough empirical study of competing methods as well as a theoretical characterization of MR-MTL for a mean estimation problem.
arXiv Detail & Related papers (2022-06-16T03:26:48Z) - Hierarchical Federated Learning with Privacy [22.392087447002567]
Federated learning (FL), where data remains at the federated clients and only gradient updates are shared with a central aggregator, was assumed to be private.
Recent work demonstrates that adversaries with gradient-level access can mount successful inference and reconstruction attacks.
In this work, we take the first step towards mitigating such trade-offs through hierarchical FL (HFL).
arXiv Detail & Related papers (2022-06-10T16:10:42Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
arXiv Detail & Related papers (2021-11-17T18:23:49Z) - Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.