Hierarchical Federated Learning with Privacy
- URL: http://arxiv.org/abs/2206.05209v1
- Date: Fri, 10 Jun 2022 16:10:42 GMT
- Title: Hierarchical Federated Learning with Privacy
- Authors: Varun Chandrasekaran, Suman Banerjee, Diego Perino, Nicolas Kourtellis
- Abstract summary: Federated learning (FL), where data remains at the federated clients and where only gradient updates are shared with a central aggregator, was assumed to be private.
Recent work demonstrates that adversaries with gradient-level access can mount successful inference and reconstruction attacks.
In such settings, differentially private (DP) learning is known to provide resilience, but existing central and local DP approaches introduce disparate utility vs. privacy trade-offs.
In this work, we take the first step towards mitigating such trade-offs through hierarchical FL (HFL).
- Score: 22.392087447002567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), where data remains at the federated clients, and where only gradient updates are shared with a central aggregator, was assumed to be private. Recent work demonstrates that adversaries with gradient-level access can mount successful inference and reconstruction attacks. In such settings, differentially private (DP) learning is known to provide resilience. However, approaches used in the status quo (i.e., central and local DP) introduce disparate utility vs. privacy trade-offs. In this work, we take the first step towards mitigating such trade-offs through hierarchical FL (HFL). We demonstrate that by the introduction of a new intermediary level where calibrated DP noise can be added, better privacy vs. utility trade-offs can be obtained; we term this hierarchical DP (HDP). Our experiments with 3 different datasets (commonly used as benchmarks for FL) suggest that HDP produces models as accurate as those obtained using central DP, where noise is added at a central aggregator. Such an approach also provides comparable benefit against inference adversaries as in the local DP case, where noise is added at the federated clients.
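To make the HDP idea concrete, the sketch below walks through one aggregation round in Python/NumPy, with calibrated Gaussian noise injected at the intermediary level rather than at the clients (local DP) or the center (central DP). The two-level topology, clipping bound, and noise multiplier are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of hierarchical DP (HDP): clients send clipped updates to
# an intermediary aggregator, which adds calibrated Gaussian noise before
# forwarding to the central aggregator. CLIP and NOISE_MULT are assumptions.
import numpy as np

CLIP = 1.0        # per-client L2 clipping bound (assumed)
NOISE_MULT = 0.8  # noise multiplier sigma (assumed)

def clip_update(update: np.ndarray, clip: float = CLIP) -> np.ndarray:
    """Scale a client update so its L2 norm is at most `clip`."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip / (norm + 1e-12))

def intermediary_aggregate(client_updates: list,
                           rng: np.random.Generator) -> np.ndarray:
    """Average clipped client updates and add calibrated DP noise.

    In HDP the noise is injected here, at the intermediary level,
    rather than at the clients or at the central aggregator.
    """
    clipped = [clip_update(u) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = NOISE_MULT * CLIP / len(client_updates)  # sensitivity of the mean
    return avg + rng.normal(0.0, sigma, size=avg.shape)

def central_aggregate(intermediary_updates: list) -> np.ndarray:
    """The central aggregator only averages; noise was already added below it."""
    return np.mean(intermediary_updates, axis=0)

# Toy round: 2 intermediaries, 3 clients each, 4-parameter model.
rng = np.random.default_rng(0)
groups = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]
noisy = [intermediary_aggregate(g, rng) for g in groups]
print(central_aggregate(noisy))
```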
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Secure Stateful Aggregation: A Practical Protocol with Applications in Differentially-Private Federated Learning [36.42916779389165]
DP-FTRL based approaches have already seen widespread deployment in industry.
We introduce secure stateful aggregation: a simple append-only data structure that allows for the private storage of aggregate values.
We observe that secure stateful aggregation suffices for realizing DP-FTRL-based private federated learning.
arXiv Detail & Related papers (2024-10-15T07:45:18Z)
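As a rough illustration of the contract this paper describes, here is a plaintext (and therefore deliberately insecure) sketch of an append-only aggregate store: contributions can only be appended, and only aggregate values are ever released. The class name and methods are assumptions for illustration; the paper realizes this contract cryptographically.

```python
# A minimal, *insecure* sketch of the secure stateful aggregation interface.
# In the paper, the private state is protected by secure computation; here the
# plaintext class only illustrates the append-only, aggregate-only contract.
import numpy as np

class StatefulAggregator:
    def __init__(self, dim: int):
        self._sum = np.zeros(dim)  # running aggregate (held privately)
        self._count = 0            # number of appended contributions

    def append(self, contribution: np.ndarray) -> None:
        """Append-only: contributions can be added but never read back."""
        self._sum += contribution
        self._count += 1

    def release_aggregate(self) -> np.ndarray:
        """Reveal only the aggregate, e.g. to drive a DP-FTRL server step."""
        return self._sum / max(self._count, 1)

agg = StatefulAggregator(dim=4)
rng = np.random.default_rng(1)
for _ in range(5):            # five clients contribute updates
    agg.append(rng.normal(size=4))
print(agg.release_aggregate())
```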
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Make Landscape Flatter in Differentially Private Federated Learning [69.78485792860333]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates local flatness models with better stability and weight robustness, which results in a small norm of local updates and robustness to DP noise.
Our algorithm outperforms existing state-of-the-art (SOTA) baselines in DPFL.
arXiv Detail & Related papers (2023-03-20T16:27:36Z)
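A minimal sketch of the flavor of local step this summary suggests: a SAM-style gradient evaluated at an adversarially perturbed point (biasing training toward flat minima), followed by clipping and Gaussian noising of the local update. The toy quadratic loss and all hyperparameters are assumptions, not the paper's algorithm verbatim.

```python
# SAM-style local step plus DP clipping and noising, in the spirit of
# DP-FedSAM. RHO, CLIP, SIGMA, and the toy loss ||w||^2 are assumed.
import numpy as np

RHO, CLIP, SIGMA = 0.05, 1.0, 0.5  # assumed hyperparameters

def grad(w: np.ndarray) -> np.ndarray:
    return 2.0 * w  # gradient of the toy loss ||w||^2

def sam_dp_local_update(w: np.ndarray, lr: float,
                        rng: np.random.Generator) -> np.ndarray:
    g = grad(w)
    # SAM: take the gradient at an adversarially perturbed point,
    # which steers the local model toward flatter minima.
    w_adv = w + RHO * g / (np.linalg.norm(g) + 1e-12)
    delta = -lr * grad(w_adv)
    # DP: clip the local update, then add Gaussian noise.
    delta *= min(1.0, CLIP / (np.linalg.norm(delta) + 1e-12))
    return delta + rng.normal(0.0, SIGMA * CLIP, size=delta.shape)

rng = np.random.default_rng(2)
print(sam_dp_local_update(np.ones(4), lr=0.1, rng=rng))
```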
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach converges faster than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise [30.005017338416327]
Federated learning seeks to address the issue of isolated data islands by making clients disclose only their local training models.
Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise can significantly degrade learning performance.
We propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL).
arXiv Detail & Related papers (2022-11-29T03:20:40Z)
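A minimal sketch of the adaptive-noise idea: inject larger noise in early rounds and scale it down as training converges. The exponential decay schedule and constants below are assumptions for illustration; the paper's actual adaptation rule may differ.

```python
# Adaptive noise for DP federated learning, in the spirit of Adap DP-FL.
# The decay schedule and constants are illustrative assumptions.
import numpy as np

def adaptive_sigma(round_idx: int, sigma0: float = 1.0,
                   decay: float = 0.95) -> float:
    """Exponentially decaying noise multiplier (assumed schedule)."""
    return sigma0 * decay ** round_idx

rng = np.random.default_rng(3)
update = np.ones(4)
for t in range(3):
    sigma = adaptive_sigma(t)
    noisy = update + rng.normal(0.0, sigma, size=update.shape)
    print(f"round {t}: sigma={sigma:.3f}, noisy={noisy}")
```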
- Subject Granular Differential Privacy in Federated Learning [2.9439848714137447]
We propose two new algorithms that enforce subject-level DP at each federation user locally.
Our first algorithm, called LocalGroupDP, is a straightforward application of group differential privacy in the popular DP-SGD algorithm.
Our second algorithm is based on a novel idea of hierarchical gradient averaging (HiGradAvgDP) for subjects participating in a training mini-batch.
arXiv Detail & Related papers (2022-06-07T23:54:36Z)
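A minimal sketch of the group-DP reasoning behind LocalGroupDP: if a single subject can contribute up to k examples to a mini-batch, the L2 sensitivity of the summed clipped gradients grows to k times the clipping bound, so the Gaussian noise must scale with it. Constants and shapes below are assumptions.

```python
# Group DP for subject-level privacy, in the spirit of LocalGroupDP.
# C (clip bound) and SIGMA (noise multiplier) are assumed constants.
import numpy as np

C, SIGMA = 1.0, 0.8

def subject_level_noisy_sum(per_example_grads: np.ndarray,
                            max_per_subject: int,
                            rng: np.random.Generator) -> np.ndarray:
    # Clip each example's gradient to L2 norm at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / (norms + 1e-12))
    # Group DP: a subject with up to k records shifts the sum by up to k * C.
    sensitivity = max_per_subject * C
    noise = rng.normal(0.0, SIGMA * sensitivity,
                       size=per_example_grads.shape[1])
    return clipped.sum(axis=0) + noise

rng = np.random.default_rng(4)
grads = rng.normal(size=(8, 4))  # 8 examples, 4 parameters
print(subject_level_noisy_sum(grads, max_per_subject=2, rng=rng))
```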
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
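The clipping bias this paper analyzes can be seen in a few lines: with heterogeneous client updates, the average of clipped updates no longer estimates the true average. The synthetic update distribution below is an assumption chosen to make the effect visible.

```python
# Demonstrating clipping bias under client heterogeneity: a few clients with
# large updates get clipped, shifting the mean. CLIP and scales are assumed.
import numpy as np

CLIP = 1.0

def clip(u: np.ndarray) -> np.ndarray:
    return u * min(1.0, CLIP / (np.linalg.norm(u) + 1e-12))

rng = np.random.default_rng(5)
# Heterogeneous clients: two have much larger updates than the rest.
updates = [rng.normal(scale=s, size=4) for s in (0.1, 0.1, 0.1, 5.0, 5.0)]
true_avg = np.mean(updates, axis=0)
clipped_avg = np.mean([clip(u) for u in updates], axis=0)
print("clipping bias (L2):", np.linalg.norm(true_avg - clipped_avg))
```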
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models collaboratively among distributed clients.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
- Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
arXiv Detail & Related papers (2020-07-13T18:14:09Z)
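A minimal sketch of the random check-in participation pattern: each client flips a local, independent coin and, if participating, picks a uniformly random time slot; the server never selects clients. The participation probability and slot count below are illustrative assumptions.

```python
# Random check-ins: participation is decided locally and independently by
# each client, never by the server. p and n_slots are assumed parameters.
import numpy as np

def random_check_ins(n_clients: int, n_slots: int, p: float,
                     rng: np.random.Generator) -> dict:
    """Map each time slot to the clients that checked in for it."""
    schedule = {t: [] for t in range(n_slots)}
    for client in range(n_clients):
        if rng.random() < p:                  # local, independent coin flip
            slot = int(rng.integers(n_slots)) # uniformly random time slot
            schedule[slot].append(client)
    return schedule

rng = np.random.default_rng(6)
print(random_check_ins(n_clients=10, n_slots=4, p=0.5, rng=rng))
```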