Fairness-aware Federated Minimax Optimization with Convergence Guarantee
- URL: http://arxiv.org/abs/2307.04417v5
- Date: Thu, 17 Oct 2024 04:56:28 GMT
- Title: Fairness-aware Federated Minimax Optimization with Convergence Guarantee
- Authors: Gerry Windiarto Mohamad Dunda, Shenghui Song
- Abstract summary: Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
- Score: 10.727328530242461
- License:
- Abstract: Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature. Nonetheless, the lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender. To tackle this issue, this paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL. Specifically, we impose a fairness constraint on the training objective and solve the minimax reformulation of the constrained optimization problem. Then, we derive the theoretical upper bound for the convergence rate of FFALM. The effectiveness of FFALM in improving fairness is shown empirically on CelebA and UTKFace datasets in the presence of severe statistical heterogeneity.
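To make the minimax reformulation concrete, below is a minimal sketch of a FedAvg-style loop with an augmented-Lagrangian fairness penalty in the spirit of FFALM. The demographic-parity gap, the equal-weight aggregation, and the hyperparameters (lam, rho, lr, local_epochs) are illustrative assumptions, not the paper's exact algorithm.
```python
# Minimal sketch of a FedAvg-style loop with an augmented-Lagrangian fairness
# penalty, in the spirit of FFALM as described in the abstract.
# The demographic-parity gap, equal-weight aggregation, and the hyperparameters
# (lam, rho, lr, local_epochs) are illustrative assumptions, not the paper's
# exact algorithm.
import copy
import torch
import torch.nn.functional as F

def fairness_gap(model, x, s):
    """Illustrative group-fairness violation: demographic-parity gap
    |P(yhat=1 | s=0) - P(yhat=1 | s=1)| estimated on a mini-batch."""
    p = torch.sigmoid(model(x)).squeeze(-1)
    return (p[s == 0].mean() - p[s == 1].mean()).abs()

def local_update(global_model, loader, lam, rho, lr=0.01, local_epochs=1):
    """Client minimizes task loss + lam * c(w) + (rho/2) * c(w)^2,
    i.e., the primal (inner) problem of the augmented Lagrangian."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    last_gap = 0.0
    for _ in range(local_epochs):
        for x, y, s in loader:                      # features, labels, sensitive attr
            c = fairness_gap(model, x, s)
            loss = F.binary_cross_entropy_with_logits(
                model(x).squeeze(-1), y.float()) + lam * c + 0.5 * rho * c ** 2
            opt.zero_grad(); loss.backward(); opt.step()
            last_gap = float(c.detach())
    return model.state_dict(), last_gap

def server_round(global_model, client_loaders, lam, rho):
    """FedAvg aggregation of client models plus a dual ascent step on lam."""
    results = [local_update(global_model, dl, lam, rho) for dl in client_loaders]
    states, gaps = zip(*results)
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    lam = max(0.0, lam + rho * sum(gaps) / len(gaps))   # maximize over the multiplier
    return global_model, lam
```
A full implementation would also weight the aggregation by per-client sample counts and handle the multiplier update as required by the paper's convergence analysis.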
Related papers
- PUFFLE: Balancing Privacy, Utility, and Fairness in Federated Learning [2.8304839563562436]
Training and deploying Machine Learning models that simultaneously adhere to principles of fairness and privacy poses a significant challenge.
We introduce PUFFLE, a high-level parameterised approach that can help in the exploration of the balance between utility, privacy, and fairness in FL scenarios.
We prove that PUFFLE can be effective across diverse datasets, models, and data distributions, reducing the model unfairness by up to 75% with a maximum utility reduction of 17% in the worst-case scenario.
arXiv Detail & Related papers (2024-07-21T17:22:18Z) - FedSat: A Statistical Aggregation Approach for Class Imbalanced Clients in Federated Learning [2.5628953713168685]
Federated learning (FL) has emerged as a promising paradigm for privacy-preserving distributed machine learning.
This paper introduces FedSat, a novel FL approach designed to tackle various forms of data heterogeneity simultaneously.
arXiv Detail & Related papers (2024-07-04T11:50:24Z) - FedFDP: Fairness-Aware Federated Learning with Differential Privacy [21.55903748640851]
Federated learning (FL) is a new machine learning paradigm to overcome the challenge of data silos.
We first propose a fairness-aware federated learning algorithm, termed FedFair.
We then introduce differential privacy protection to form the FedFDP algorithm to address the trade-offs among fairness, privacy protection, and model performance.
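As generic background for how differential privacy is typically layered onto such a fairness-aware FL update (not FedFDP itself), the sketch below clips each client's model delta and adds Gaussian noise at aggregation; the clip norm and noise multiplier are illustrative assumptions.
```python
# Generic sketch: clip each client's model delta and add Gaussian noise at
# aggregation. Clip norm and noise multiplier are illustrative; this is not
# the FedFDP algorithm itself.
import torch

def dp_aggregate(client_deltas, clip_norm=1.0, noise_multiplier=1.0):
    """client_deltas: list of dicts mapping parameter name -> update tensor."""
    clipped = []
    for delta in client_deltas:
        total = torch.sqrt(sum((v ** 2).sum() for v in delta.values()))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)  # per-client clipping
        clipped.append({k: v * scale for k, v in delta.items()})
    n = len(client_deltas)
    # Gaussian mechanism: sensitivity of the mean w.r.t. one client is clip_norm / n
    return {
        k: torch.stack([c[k] for c in clipped]).mean(0)
           + torch.randn_like(clipped[0][k]) * noise_multiplier * clip_norm / n
        for k in clipped[0]
    }
```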
arXiv Detail & Related papers (2024-02-25T08:35:21Z) - Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN achieves remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z) - Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems, together with a convergence guarantee.
Its key properties and theoretical analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
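For intuition only, the sketch below evaluates a fairness penalty at a first-order worst-case weight perturbation (a SAM-style ascent step); it illustrates the general weight-perturbation idea rather than the RFR regularizer derived in the paper, and `fairness_loss_fn` and `eps` are assumed placeholders.
```python
# Illustrative only: fairness loss evaluated at w + eps * g / ||g||, where g is
# the gradient of the fairness loss at the current weights w. Not the paper's
# RFR derivation.
import torch

def weight_perturbed_fairness_penalty(model, fairness_loss_fn, batch, eps=0.05):
    """Return the fairness loss under a first-order worst-case weight perturbation."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = fairness_loss_fn(model, batch)
    grads = torch.autograd.grad(loss, params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                        # step to the perturbed weights
        for p, g in zip(params, grads):
            p.add_(eps * g / norm)
    perturbed = fairness_loss_fn(model, batch)   # fairness loss under perturbation
    with torch.no_grad():                        # restore the original weights
        for p, g in zip(params, grads):
            p.sub_(eps * g / norm)
    return perturbed
```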
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
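As a rough illustration of the kind of formulation summarized above (not the authors' stochastic method), the sketch below minimizes a pairwise surrogate of 1 - AUC while penalizing the gap between group-conditional surrogates; the squared-hinge surrogate and the penalty weight `mu` are assumptions.
```python
# Rough illustration: overall pairwise AUC surrogate plus a penalty on the
# difference between group-conditional surrogates. Surrogate choice and mu
# are assumptions, not the paper's exact constrained formulation.
import torch

def pairwise_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Differentiable squared-hinge surrogate for 1 - AUC over all
    positive/negative score pairs."""
    diff = scores_pos.unsqueeze(1) - scores_neg.unsqueeze(0)   # all pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()

def fairness_constrained_auc_loss(scores, labels, group, mu=1.0):
    """Overall AUC surrogate plus a penalty on the difference between the
    surrogates computed within each sensitive group (assumed binary)."""
    overall = pairwise_auc_surrogate(scores[labels == 1], scores[labels == 0])
    per_group = [
        pairwise_auc_surrogate(scores[(labels == 1) & (group == g)],
                               scores[(labels == 0) & (group == g)])
        for g in (0, 1)
    ]
    return overall + mu * (per_group[0] - per_group[1]).abs()
```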
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data [31.611582207768464]
Federated learning (FL) allows agents to jointly train a global model without sharing their local data.
We propose a formal FL fairness definition, fairness via agent-awareness (FAA), which takes different contributions of heterogeneous agents into account.
We also propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve fairness in FL measured by FAA.
arXiv Detail & Related papers (2022-07-21T02:21:03Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization [69.07420650261649]
We introduce a novel, simple, and powerful contrastive MI estimator named FLO.
Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently.
The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation.
arXiv Detail & Related papers (2021-07-02T15:20:41Z)