Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective
- URL: http://arxiv.org/abs/2408.15621v2
- Date: Sat, 12 Oct 2024 08:27:49 GMT
- Title: Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective
- Authors: Yan Sun, Li Shen, Dacheng Tao,
- Abstract summary: Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
- Score: 57.35402286842029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is an efficient collaborative training paradigm extensively developed with a focus on local privacy, and differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protection. Their combination provides a promising paradigm for large-scale private clients. As the predominant implementation, noisy perturbation has been widely studied and theoretically proven to offer significant protection. However, existing analyses in FL-DP mostly rely on the composition theorem and cannot tightly quantify privacy leakage: the resulting bound is tight for a few communication rounds but eventually becomes arbitrarily loose and divergent. This implies the counterintuitive judgment that FL-DP may not provide adequate privacy protection during long-term training. To further investigate the convergent privacy and reliability of the FL-DP framework, in this paper we comprehensively evaluate the worst-case privacy of two classical methods under non-convex and smooth objectives based on $f$-DP analysis. With the aid of the shifted-interpolation technique, we prove that the privacy of {\ttfamily Noisy-FedAvg} admits a tight convergent bound. Moreover, with the regularization of the proximal term, the privacy of {\ttfamily Noisy-FedProx} has a stable constant lower bound. Our analysis provides a solid theoretical foundation for the reliability of privacy in FL-DP. Meanwhile, our conclusions can be losslessly converted to other classical DP analytical frameworks, e.g. $(\epsilon,\delta)$-DP and Rényi-DP (RDP).
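The noisy-perturbation mechanism the abstract refers to can be sketched as a single communication round of a Noisy-FedAvg-style update: each client runs local SGD, its update is norm-clipped, and Gaussian noise is added before server averaging. This is an illustrative sketch only; the function name, the least-squares local objective, and all hyperparameters are assumptions for demonstration, not the paper's exact algorithm.

```python
import numpy as np

def noisy_fedavg_round(global_model, client_data, local_steps, lr, clip, sigma, rng):
    """One communication round of a Noisy-FedAvg-style update (illustrative sketch).

    Each client runs `local_steps` of local SGD; its update is clipped to norm
    `clip` and perturbed with Gaussian noise of standard deviation `sigma * clip`
    (the classical Gaussian-mechanism perturbation) before server averaging.
    """
    updates = []
    for X, y in client_data:
        w = global_model.copy()
        for _ in range(local_steps):
            # Least-squares gradient as a stand-in for any smooth local loss.
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        delta = w - global_model
        # Clip the local update, then perturb it with Gaussian noise.
        delta = delta * min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
        delta = delta + rng.normal(0.0, sigma * clip, size=delta.shape)
        updates.append(delta)
    return global_model + np.mean(updates, axis=0)
```

The noise scale `sigma` is what the $f$-DP analysis would translate into a privacy guarantee; larger `sigma` means stronger privacy but noisier aggregation.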
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z) - The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD [46.71175773861434]
We propose a simple privacy analysis of noisy clipped gradient descent (DP-SGD)
We show experimentally that our analysis is predictive of the outcome of privacy auditing applied to various training procedures.
We also empirically support our analysis and show that existing privacy auditing attacks are bounded by it in both vision and language tasks.
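The noisy clipped gradient step at the heart of DP-SGD, mentioned in the blurb above, can be sketched as follows; the function name and parameters are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr, clip, sigma, rng):
    """One DP-SGD step: clip each per-example gradient to norm `clip`,
    sum, add Gaussian noise of scale `sigma * clip`, average, descend."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, sigma * clip, size=w.shape)) / len(clipped)
    return w - lr * noisy_mean
```

Per-example clipping bounds each example's sensitivity, which is what makes the added Gaussian noise yield a formal DP guarantee per step.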
arXiv Detail & Related papers (2024-10-08T16:51:10Z) - Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z) - Deciphering the Interplay between Local Differential Privacy, Average Bayesian Privacy, and Maximum Bayesian Privacy [5.622065847054885]
We introduce Bayesian privacy and delve into the relationship between LDP and its Bayesian counterparts, unveiling novel insights into utility-privacy trade-offs.
Our work not only lays the groundwork for future empirical exploration but also promises to facilitate the design of privacy-preserving algorithms.
arXiv Detail & Related papers (2024-03-25T10:06:45Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z) - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - Understanding the Interplay between Privacy and Robustness in Federated Learning [15.673448030003788]
Federated Learning (FL) is emerging as a promising paradigm of privacy-preserving machine learning.
Recent works highlighted several privacy and robustness weaknesses in FL.
It is still not clear how LDP affects adversarial robustness in FL.
arXiv Detail & Related papers (2021-06-13T16:01:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.