Federated Learning with Sparsified Model Perturbation: Improving
Accuracy under Client-Level Differential Privacy
- URL: http://arxiv.org/abs/2202.07178v1
- Date: Tue, 15 Feb 2022 04:05:42 GMT
- Authors: Rui Hu, Yanmin Gong and Yuanxiong Guo
- Abstract summary: Federated learning (FL) enables distributed clients to collaboratively learn a shared statistical model.
However, sensitive information about the training data can still be inferred from model updates shared in FL.
Differential privacy (DP) is the state-of-the-art technique to defend against those attacks.
This paper develops a novel FL scheme named Fed-SMP that provides client-level DP guarantee while maintaining high model accuracy.
- Score: 27.243322019117144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL), which enables distributed clients to
collaboratively learn a shared statistical model while keeping their training
data local, has received great attention recently and can improve privacy and
communication efficiency compared with the traditional centralized machine
learning paradigm. However, sensitive information about the training data can still be
inferred from model updates shared in FL. Differential privacy (DP) is the
state-of-the-art technique to defend against those attacks. The key challenge
to achieve DP in FL lies in the adverse impact of DP noise on model accuracy,
particularly for deep learning models with large numbers of model parameters.
This paper develops a novel differentially-private FL scheme named Fed-SMP that
provides client-level DP guarantee while maintaining high model accuracy. To
mitigate the impact of privacy protection on model accuracy, Fed-SMP leverages
a new technique called Sparsified Model Perturbation (SMP), where local models
are sparsified first before being perturbed with additive Gaussian noise. Two
sparsification strategies are considered in Fed-SMP: random sparsification and
top-$k$ sparsification. We also apply Rényi differential privacy to provide a
tight analysis of the end-to-end DP guarantee of Fed-SMP and prove
the convergence of Fed-SMP with general loss functions. Extensive experiments
on real-world datasets are conducted to demonstrate the effectiveness of
Fed-SMP in largely improving model accuracy with the same level of DP guarantee
and saving communication cost simultaneously.
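The core SMP step described above (sparsify the local model update first, then perturb only the kept coordinates with Gaussian noise) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, parameter names, and the placement of norm clipping are all hypothetical.

```python
import numpy as np

def sparsified_model_perturbation(update, k, clip_norm, sigma, rng, mode="topk"):
    """Sketch of SMP: keep k coordinates of a client's update, clip its norm,
    then add Gaussian noise on the kept coordinates only.

    Hypothetical helper for illustration; `sigma` plays the role of a noise
    multiplier, so the noise std is sigma * clip_norm (Gaussian mechanism).
    """
    d = update.size
    if mode == "topk":
        # top-k sparsification: keep the k largest-magnitude coordinates
        idx = np.argpartition(np.abs(update), d - k)[-k:]
    else:
        # random sparsification: keep k uniformly chosen coordinates
        idx = rng.choice(d, size=k, replace=False)
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    # clip the sparsified update to bound each client's contribution
    # (this bounds the client-level sensitivity of the aggregate)
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / (norm + 1e-12))
    # perturb only the k kept coordinates with Gaussian noise
    sparse[idx] += rng.normal(0.0, sigma * clip_norm, size=k)
    return sparse
```

Adding noise only on the $k$ retained coordinates, rather than on all $d$ parameters, is what lets sparsification reduce the total noise injected for the same DP guarantee, which is the intuition behind the accuracy improvement claimed in the abstract.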
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Towards the Flatter Landscape and Better Generalization in Federated
Learning under Client-level Differential Privacy [67.33715954653098]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates Sharpness-Aware Minimization (SAM) to generate local flat models with stability and weight-perturbation robustness.
To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-$top_k$ by adopting a local update sparsification technique.
arXiv Detail & Related papers (2023-05-01T15:19:09Z) - Communication and Energy Efficient Wireless Federated Learning with
Intrinsic Privacy [16.305837225117603]
Federated Learning (FL) is a collaborative learning framework that enables edge devices to collaboratively learn a global model while keeping raw data locally.
We propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) to provide a client-level DP guarantee with intrinsic channel noise.
arXiv Detail & Related papers (2023-04-15T03:04:11Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - On the Practicality of Differential Privacy in Federated Learning by
Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive
Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
arXiv Detail & Related papers (2020-08-01T20:22:57Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.