Make Landscape Flatter in Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2303.11242v2
- Date: Mon, 26 Jun 2023 05:53:49 GMT
- Title: Make Landscape Flatter in Differentially Private Federated Learning
- Authors: Yifan Shi, Yingqi Liu, Kang Wei, Li Shen, Xueqian Wang, Dacheng Tao
- Abstract summary: We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally flat models with better stability and weight-perturbation robustness, which results in a small norm of local updates and robustness to DP noise.
Our algorithm achieves state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL.
- Score: 69.78485792860333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To defend against inference attacks and mitigate sensitive information
leakage in Federated Learning (FL), client-level Differentially Private FL
(DPFL) is the de-facto standard for privacy protection by clipping local
updates and adding random noise. However, existing DPFL methods tend to produce a
sharper loss landscape and have poorer weight-perturbation robustness,
resulting in severe performance degradation. To alleviate these issues, we
propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient
perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM
integrates the Sharpness-Aware Minimization (SAM) optimizer to generate locally
flat models with better stability and weight-perturbation robustness, which
results in a small norm of local updates and robustness to DP noise, thereby
improving the performance. From the theoretical perspective, we analyze in
detail how DP-FedSAM mitigates the performance degradation induced by DP.
Meanwhile, we give rigorous privacy guarantees with Rényi DP and present the
sensitivity analysis of local updates. At last, we empirically confirm that our
algorithm achieves state-of-the-art (SOTA) performance compared with existing
SOTA baselines in DPFL. Code is available at
https://github.com/YMJS-Irfan/DP-FedSAM
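The client-side procedure the abstract describes (a SAM perturbation, then clipping and noising of the local update) can be sketched as follows. This is an illustrative single-step sketch, not the paper's implementation: the hyperparameters, the toy single-SGD-step local update, and the choice to add noise client-side are all assumptions for self-containment.

```python
import numpy as np

def dp_fedsam_local_step(w, grad_fn, rho=0.05, lr=0.1, clip_norm=1.0,
                         sigma=0.5, seed=0):
    """One illustrative DP-FedSAM-style client step: a SAM gradient
    perturbation, then clipping and Gaussian noising of the local update.
    All hyperparameters here are illustrative, not the paper's settings."""
    rng = np.random.default_rng(seed)
    # SAM: evaluate the gradient at the perturbed point w + rho * g / ||g||,
    # which steers the update toward flat regions of the loss landscape.
    g = grad_fn(w)
    g_sam = grad_fn(w + rho * g / (np.linalg.norm(g) + 1e-12))
    # Local update (a single SGD step, for simplicity).
    delta = -lr * g_sam
    # Clip the update to bound its sensitivity, then add Gaussian noise.
    # (In deployed client-level DPFL the noise is often added server-side
    # after aggregation; adding it here keeps the sketch self-contained.)
    delta *= min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    delta += rng.normal(0.0, sigma * clip_norm, size=delta.shape)
    return w + delta
```

Because SAM's gradient is taken at the flattest-seeking perturbed point, the clipped update tends to have a smaller norm, which is the mechanism the abstract credits for the improved robustness to DP noise.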
Related papers
- DP$^2$-FedSAM: Enhancing Differentially Private Federated Learning Through Personalized Sharpness-Aware Minimization [8.022417295372492]
Federated learning (FL) is a distributed machine learning approach that allows multiple clients to collaboratively train a model without sharing their raw data.
To prevent sensitive information from being inferred through the model updates shared in FL, differentially private federated learning (DPFL) has been proposed.
DPFL ensures formal and rigorous privacy protection in FL by clipping and adding random noise to the shared model updates.
We propose DP$^2$-FedSAM: Differentially Private and Personalized Federated Learning with Sharpness-Aware Minimization.
arXiv Detail & Related papers (2024-09-20T16:49:01Z)
- DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction [47.65999101635902]
Differentially private (DP) training prevents the leakage of sensitive information in the collected training data from trained machine learning models.
We develop a new component, called DOPPLER, which works by effectively amplifying the gradient while suppressing DP noise in the frequency domain.
Our experiments show that the proposed DP optimizers with a low-pass filter outperform their counterparts without the filter by 3%-10% in test accuracy.
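DOPPLER's frequency-domain idea of passing the slowly varying gradient signal while attenuating high-frequency DP noise can be approximated in the time domain; the first-order (exponential-moving-average) low-pass filter below is an illustrative stand-in, not the paper's actual filter design.

```python
import numpy as np

def lowpass_filter_gradients(noisy_grads, beta=0.9):
    """Illustrative first-order low-pass (EMA) filter over a sequence of
    noisy DP gradients: the slowly varying gradient signal passes through,
    while high-frequency noise is attenuated."""
    filtered, state = [], np.zeros_like(noisy_grads[0])
    for g in noisy_grads:
        state = beta * state + (1 - beta) * g
        filtered.append(state.copy())
    return filtered
```

With `beta = 0.9`, the steady-state noise variance of the filtered stream is reduced by a factor of roughly (1 - beta)/(1 + beta) ≈ 0.05 relative to the raw noisy gradients, at the cost of some lag in tracking gradient changes.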
arXiv Detail & Related papers (2024-08-24T04:27:07Z)
- Mitigating Disparate Impact of Differential Privacy in Federated Learning through Robust Clustering [4.768272342753616]
Federated Learning (FL) is a decentralized machine learning (ML) approach that keeps data localized and often incorporates Differential Privacy (DP) to enhance privacy guarantees.
Recent work has attempted to address performance fairness in vanilla FL through clustering, but this method remains sensitive and prone to errors.
We propose a novel clustered DPFL algorithm designed to effectively identify clients' clusters in highly heterogeneous settings.
arXiv Detail & Related papers (2024-05-29T17:03:31Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private SGD with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Towards the Flatter Landscape and Better Generalization in Federated Learning under Client-level Differential Privacy [67.33715954653098]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM integrates Sharpness-Aware Minimization (SAM) to generate locally flat models with better stability and weight-perturbation robustness.
To further reduce the magnitude of the random noise while achieving better performance, we propose DP-FedSAM-$top_k$ by adopting the local update sparsification technique.
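The local update sparsification mentioned above can be sketched as a top-$k$ magnitude mask: only the $k$ largest-magnitude coordinates of the update are transmitted, which shrinks the update norm before clipping and noising. The function and example values below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a local update and
    zero out the rest, reducing the update norm before clipping/noising."""
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    # Indices of the k largest-magnitude entries (unordered selection).
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape)
```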
arXiv Detail & Related papers (2023-05-01T15:19:09Z)
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise-amplitude series to prevent FL from premature convergence caused by excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, compared to the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.