A Theoretical Analysis of Efficiency Constrained Utility-Privacy
Bi-Objective Optimization in Federated Learning
- URL: http://arxiv.org/abs/2312.16554v2
- Date: Mon, 29 Jan 2024 12:27:56 GMT
- Title: A Theoretical Analysis of Efficiency Constrained Utility-Privacy
Bi-Objective Optimization in Federated Learning
- Authors: Hanlin Gu, Xinyuan Zhao, Gongxi Zhu, Yuxing Han, Yan Kang, Lixin Fan,
Qiang Yang
- Abstract summary: Federated learning (FL) enables multiple clients to collaboratively learn a shared model without sharing their individual data.
Differential privacy has emerged as a prevalent technique in FL, safeguarding the privacy of individual user data while impacting utility and training efficiency.
This paper systematically formulates an efficiency-constrained utility-privacy bi-objective optimization problem in DPFL.
- Score: 23.563789510998333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables multiple clients to collaboratively learn a
shared model without sharing their individual data. Concerns about utility,
privacy, and training efficiency in FL have garnered significant research
attention. Differential privacy has emerged as a prevalent technique in FL,
safeguarding the privacy of individual user data while impacting utility and
training efficiency. Within Differential Privacy Federated Learning (DPFL),
previous studies have primarily focused on the utility-privacy trade-off,
neglecting training efficiency, which is crucial for timely completion.
Moreover, differential privacy achieves privacy by introducing controlled
randomness (noise) on selected clients in each communication round. Previous
work has mainly examined the impact of noise level ($\sigma$) and communication
rounds ($T$) on the privacy-utility dynamic, overlooking other influential
factors like the sample ratio ($q$, the proportion of selected clients). This
paper systematically formulates an efficiency-constrained utility-privacy
bi-objective optimization problem in DPFL, focusing on $\sigma$, $T$, and $q$.
We provide a comprehensive theoretical analysis, yielding analytical solutions
for the Pareto front. Extensive empirical experiments verify the validity and
efficacy of our analysis, offering valuable guidance for low-cost parameter
design in DPFL.
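To make the three parameters in the abstract concrete, below is a minimal sketch (not the authors' implementation) of the DP-FL loop they parameterize: noise multiplier $\sigma$, number of communication rounds $T$, and client sample ratio $q$. The `Client` class and helper routines are hypothetical stand-ins for a real local-training procedure, and the exact noise-scaling convention varies across papers.

```python
import numpy as np


class Client:
    """Toy client whose local update is a pull toward a private optimum (illustrative only)."""

    def __init__(self, optimum, rng):
        self.optimum = np.asarray(optimum, dtype=float)
        self.rng = rng

    def local_update(self, global_model):
        # Stand-in for several local SGD steps: move toward this client's optimum.
        return 0.5 * (self.optimum - global_model) + 0.01 * self.rng.normal(size=global_model.shape)


def dp_fl_train(global_model, clients, sigma, T, q, clip_norm=1.0, lr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(T):                                   # T communication rounds
        mask = rng.random(len(clients)) < q              # Poisson sampling with ratio q
        updates = []
        for client, chosen in zip(clients, mask):
            if not chosen:
                continue
            delta = client.local_update(global_model)
            scale = min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
            updates.append(delta * scale)                # clip to bound per-client sensitivity
        if not updates:
            continue
        agg = np.mean(updates, axis=0)
        # Gaussian noise calibrated by sigma and the clipping bound (scaling convention
        # is an assumption here; papers differ on per-client vs. per-aggregate noise).
        agg += rng.normal(0.0, sigma * clip_norm / len(updates), size=agg.shape)
        global_model = global_model + lr * agg
    return global_model


rng = np.random.default_rng(1)
clients = [Client(rng.normal(size=3), rng) for _ in range(20)]
model = dp_fl_train(np.zeros(3), clients, sigma=1.0, T=50, q=0.3)
```

Under standard subsampled-Gaussian accounting, the privacy loss grows roughly as $\epsilon = O(q\sqrt{T \log(1/\delta)}/\sigma)$, which is why $\sigma$, $T$, and $q$ are natural levers for the efficiency-constrained utility-privacy bi-objective problem studied in the paper.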
Related papers
- Personalized Federated Learning Techniques: Empirical Analysis [2.9521571597754885]
We empirically evaluate ten prominent pFL techniques across various datasets and data splits, uncovering significant differences in their performance.
Our study emphasizes the critical role of communication efficiency in scaling pFL, demonstrating how it can significantly affect resource usage in real-world deployments.
arXiv Detail & Related papers (2024-09-10T18:16:28Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach for quantifying and guaranteeing the reliability of privacy protection.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z) - DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated
Learning as a Service [15.94482624965024]
Federated learning (FL) has emerged as a prevalent distributed machine learning scheme.
We propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes both efficiency and fairness.
We show that DPBalance achieves an average efficiency improvement of $1.44\times \sim 3.49\times$, and an average fairness improvement of $1.37\times \sim 24.32\times$.
arXiv Detail & Related papers (2024-02-15T05:19:53Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Probably Approximately Correct Federated Learning [20.85915650297227]
Federated learning (FL) is a new distributed learning paradigm with privacy, utility, and efficiency as its primary pillars.
Existing research indicates that it is unlikely to simultaneously attain infinitesimal privacy leakage, utility loss, and efficiency.
How to find an optimal trade-off solution is the key consideration when designing the FL algorithm.
arXiv Detail & Related papers (2023-04-10T15:12:34Z) - A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z) - Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
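As a toy numerical illustration of that clipping bias (an assumption for intuition, not taken from the paper): when client updates are heterogeneous, the mean of clipped updates can deviate noticeably from the true mean update.

```python
import numpy as np

def clip(update, clip_norm):
    # Scale the update down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

rng = np.random.default_rng(0)
# Two "typical" clients and one client with a much larger update (data heterogeneity).
updates = [rng.normal(0.2, 0.05, size=4),
           rng.normal(0.2, 0.05, size=4),
           rng.normal(3.0, 0.05, size=4)]
true_mean = np.mean(updates, axis=0)
clipped_mean = np.mean([clip(u, clip_norm=1.0) for u in updates], axis=0)
print("clipping bias (L2 norm):", np.linalg.norm(true_mean - clipped_mean))
```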
arXiv Detail & Related papers (2021-06-25T14:47:19Z) - User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization [77.43075255745389]
Federated learning (FL) can protect the private data of mobile terminals (MTs) while still training useful models from that data.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
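A minimal sketch, under assumed conventions, of the UDP step summarized above: each mobile terminal clips its model update and adds artificial Gaussian noise locally, before uploading, so the server only ever sees a perturbed update. Parameter names (`clip_norm`, `sigma`) are illustrative, not the paper's notation.

```python
import numpy as np

def perturb_before_upload(local_update, clip_norm, sigma, rng):
    # Clip to bound the sensitivity of a single user's contribution,
    # then add Gaussian noise calibrated to that bound.
    norm = np.linalg.norm(local_update)
    clipped = local_update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

rng = np.random.default_rng(42)
update = np.array([0.5, -1.2, 0.3])
noisy_update = perturb_before_upload(update, clip_norm=1.0, sigma=1.1, rng=rng)
```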
arXiv Detail & Related papers (2020-02-29T10:13:39Z)