FedFDP: Fairness-Aware Federated Learning with Differential Privacy
- URL: http://arxiv.org/abs/2402.16028v4
- Date: Mon, 02 Dec 2024 03:17:08 GMT
- Title: FedFDP: Fairness-Aware Federated Learning with Differential Privacy
- Authors: Xinpeng Ling, Jie Fu, Kuncan Wang, Huifa Li, Tong Cheng, Zhili Chen, Haifeng Qian, Junqing Gong
- Abstract summary: Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos. To tackle persistent issues related to fairness and data privacy, we propose a fairness-aware FL algorithm called FedFair. Building on FedFair, we introduce differential privacy to create the FedFDP algorithm, which addresses trade-offs among fairness, privacy protection, and model performance.
- Score: 28.58589747796768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos, attracting considerable attention. However, FL encounters persistent issues related to fairness and data privacy. To tackle these challenges simultaneously, we propose a fairness-aware federated learning algorithm called FedFair. Building on FedFair, we introduce differential privacy to create the FedFDP algorithm, which addresses trade-offs among fairness, privacy protection, and model performance. In FedFDP, we developed a fairness-aware gradient clipping technique to explore the relationship between fairness and differential privacy. Through convergence analysis, we identified the optimal fairness adjustment parameters to achieve both maximum model performance and fairness. Additionally, we present an adaptive clipping method for uploaded loss values to reduce privacy budget consumption. Extensive experimental results show that FedFDP significantly surpasses state-of-the-art solutions in both model performance and fairness.
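No code accompanies the abstract; the sketch below illustrates what a fairness-aware DP clipping step of the kind described could look like. The adjustment rule (scaling each client's clipping bound by its loss gap to the population average) and all names (`fedfdp_client_update`, `alpha`, `noise_multiplier`) are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fedfdp_client_update(grads, client_loss, avg_loss, base_clip=1.0,
                         noise_multiplier=1.0, alpha=0.5, rng=None):
    """Hypothetical fairness-aware DP clipping step (not the authors' code).

    Clients whose loss exceeds the population average get a slightly larger
    clipping bound (alpha controls the adjustment), so their updates are
    attenuated less; Gaussian noise calibrated to that bound preserves DP.
    """
    rng = rng or np.random.default_rng()
    # Fairness-adjusted clipping bound: an assumption, not the paper's rule.
    clip = base_clip * (1.0 + alpha * np.tanh(client_loss - avg_loss))
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat)
    flat = flat * min(1.0, clip / (norm + 1e-12))   # clip to the adjusted bound
    flat += rng.normal(0.0, noise_multiplier * clip, flat.shape)  # DP noise
    return flat
```

The intuition: clients lagging behind the average loss are clipped less aggressively, and the noise scale tracks the per-client bound so the privacy guarantee is unchanged.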
Related papers
- RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility in Autonomous Vehicles [6.3338980105224145]
Existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups.
This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution optimizing both.
RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation.
We evaluate RESFL on the FACET dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions.
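As a rough illustration of uncertainty-guided aggregation (RESFL's exact rule is not given here; the inverse-uncertainty weighting is an assumption), one might weight client updates inversely to their reported uncertainty:

```python
import numpy as np

def uncertainty_weighted_aggregate(updates, uncertainties, eps=1e-8):
    """Illustrative uncertainty-guided aggregation (not RESFL's exact rule):
    clients reporting higher predictive uncertainty contribute less."""
    w = 1.0 / (np.asarray(uncertainties) + eps)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```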
arXiv Detail & Related papers (2025-03-20T15:46:03Z)
- A Stochastic Optimization Framework for Private and Fair Learning From Decentralized Data [14.748203847227542]
We develop a novel algorithm for private and fair federated learning (FL).
Our algorithm satisfies inter-silo record-level differential privacy (ISRL-DP).
Experiments demonstrate the state-of-the-art fairness-accuracy tradeoffs of our framework across different privacy levels.
arXiv Detail & Related papers (2024-11-12T15:51:35Z)
- FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation [26.52731463877256]
In this paper, we introduce the concept of the adversarial multi-armed bandit to optimize the proposed objective with explicit constraints on performance disparities.
Practically, we propose a novel multi-armed bandit-based allocation FL algorithm (FedMABA) to mitigate performance unfairness among diverse clients with different data distributions.
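As a sketch of the bandit machinery involved (a generic EXP3 step over client "arms", not FedMABA's exact allocation procedure; the reward definition is an assumption):

```python
import numpy as np

def exp3_step(weights, rewards_fn, gamma=0.1, rng=None):
    """One EXP3 step over K client 'arms' (a standard adversarial-bandit
    update, not FedMABA's exact procedure). rewards_fn(i) should return a
    reward in [0, 1], e.g. derived from client i's validation performance."""
    rng = rng or np.random.default_rng()
    K = len(weights)
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    i = rng.choice(K, p=probs)
    r = rewards_fn(i)
    weights[i] *= np.exp(gamma * r / (probs[i] * K))  # importance-weighted update
    return i, weights
```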
arXiv Detail & Related papers (2024-10-26T10:41:45Z)
- PUFFLE: Balancing Privacy, Utility, and Fairness in Federated Learning [2.8304839563562436]
Training and deploying Machine Learning models that simultaneously adhere to principles of fairness and privacy poses a significant challenge.
We introduce PUFFLE, a high-level parameterised approach that can help in the exploration of the balance between utility, privacy, and fairness in FL scenarios.
We prove that PUFFLE can be effective across diverse datasets, models, and data distributions, reducing model unfairness by up to 75% with at most a 17% reduction in utility in the worst-case scenario.
arXiv Detail & Related papers (2024-07-21T17:22:18Z)
- Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z)
- Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models [6.921070916461661]
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy.
One-shot federated learning has emerged as a solution by reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks.
arXiv Detail & Related papers (2024-05-02T17:26:52Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel differentially private federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
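A textbook augmented-Lagrangian step for a fairness constraint c(theta) <= 0, in the spirit of FFALM but not its exact iteration, looks like:

```python
import numpy as np

def augmented_lagrangian_step(theta, lam, grad_f, c, grad_c, lr=0.01, rho=1.0):
    """Generic augmented-Lagrangian update for a constraint c(theta) <= 0
    (a textbook sketch, not FFALM's exact iteration).
    grad_f/grad_c return gradients; c returns the constraint value."""
    viol = max(0.0, c(theta) + lam / rho)  # active part of the penalty
    theta = theta - lr * (grad_f(theta) + rho * viol * grad_c(theta))
    lam = max(0.0, lam + rho * c(theta))   # dual ascent on the multiplier
    return theta, lam
```

In an FL instantiation, c(theta) would encode a group-fairness gap measured on client data, with the primal step distributed across clients.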
arXiv Detail & Related papers (2023-07-10T08:45:58Z)
- Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
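A toy illustration of the orthogonal-sequence idea (conceptual only; FLORAS's actual waveform and privacy mechanism differ): each client spreads its update with its own orthonormal code, the channel superposes the transmissions, and the server despreads the aggregate.

```python
import numpy as np

def aircomp_orthogonal_sum(client_updates, seed=0):
    """Toy over-the-air aggregation with orthogonal spreading sequences
    (conceptual sketch, not FLORAS's design). client_updates is a list of
    equal-length 1-D arrays."""
    K = len(client_updates)
    Q, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(K, K)))
    on_air = sum(np.outer(Q[k], u) for k, u in enumerate(client_updates))  # channel sum
    return Q.sum(axis=0) @ on_air  # equals sum of updates up to float error
```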
arXiv Detail & Related papers (2023-06-14T06:35:10Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
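For orientation, a baseline conformal calibration step under full exchangeability (a simplification; the paper's partial-exchangeability machinery is more general, and sharing raw scores is itself an assumption here):

```python
import numpy as np

def federated_conformal_quantile(client_scores, alpha=0.1):
    """Baseline federated conformal calibration assuming full exchangeability.
    Clients contribute nonconformity scores; the server computes the
    finite-sample-corrected (1 - alpha) quantile used to build prediction sets."""
    scores = np.concatenate(client_scores)
    n = scores.size
    q = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
    return np.quantile(scores, min(q, 1.0))
```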
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server fewer times and at lower communication bandwidth cost.
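A minimal sketch of the anchor-matching idea (illustrative, not the authors' code):

```python
import numpy as np

def anchor_matching_loss(features, labels, anchors):
    """Penalize the squared distance between each sample's feature and the
    shared anchor of its class (in the spirit of FedFM, not its exact loss).
    features: (B, D); labels: (B,) ints; anchors: (C, D)."""
    diff = features - anchors[labels]
    return np.mean(np.sum(diff ** 2, axis=1))
```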
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Enforcing fairness in private federated learning via the modified method of differential multipliers [1.3381749415517021]
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy.
This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
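A generic step of the modified differential method of multipliers (following Platt and Barr's classic formulation; the paper's private-FL instantiation adds DP machinery not shown here):

```python
import numpy as np

def mdmm_step(theta, lam, grad_f, c, grad_c, lr=0.01, damping=1.0):
    """One step of the modified differential method of multipliers for an
    equality fairness constraint c(theta) = 0 (a generic sketch, not the
    paper's exact private-FL algorithm). The damping term stabilizes
    oscillation around the constraint surface."""
    cv = c(theta)
    theta = theta - lr * (grad_f(theta) + (lam + damping * cv) * grad_c(theta))
    lam = lam + lr * cv  # gradient *ascent* on the multiplier
    return theta, lam
```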
arXiv Detail & Related papers (2021-09-17T15:28:47Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
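One common recipe combining DP noise with Laplacian smoothing (a sketch of the general technique, not necessarily the paper's exact algorithm): clip, add Gaussian noise, then denoise by solving (I - sigma * Laplacian) x = g via FFT.

```python
import numpy as np

def dp_laplacian_smoothed_grad(grad, clip=1.0, noise_mult=1.0, sigma=1.0, rng=None):
    """DP gradient step with Laplacian smoothing (generic sketch).
    grad: flattened 1-D gradient of length >= 3."""
    rng = rng or np.random.default_rng()
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # clip
    g = g + rng.normal(0.0, noise_mult * clip, g.shape)         # DP noise
    d = g.size
    lap = np.zeros(d); lap[0] = -2.0; lap[1] = 1.0; lap[-1] = 1.0
    denom = 1.0 - sigma * np.fft.fft(lap)   # eigenvalues of I - sigma*L >= 1
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))
```

Because the smoothing operator's eigenvalues are at least one, the denoising step never amplifies the injected noise, which is the source of the utility gain.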
arXiv Detail & Related papers (2020-05-01T04:28:38Z)