Federated Learning with Relative Fairness
- URL: http://arxiv.org/abs/2411.01161v1
- Date: Sat, 02 Nov 2024 07:12:49 GMT
- Title: Federated Learning with Relative Fairness
- Authors: Shogo Nakakita, Tatsuya Kaneko, Shinya Takamaeda-Yamazaki, Masaaki Imaizumi
- Abstract summary: This paper proposes a federated learning framework designed to achieve \textit{relative fairness} for clients.
The proposed framework uses a minimax problem approach to minimize relative unfairness, extending previous methods in distributionally robust optimization (DRO).
A novel fairness index, based on the ratio between large and small losses among clients, is introduced, allowing the framework to assess and improve the relative fairness of trained models.
- Score: 6.460475042590685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a federated learning framework designed to achieve \textit{relative fairness} for clients. Traditional federated learning frameworks typically ensure absolute fairness by guaranteeing minimum performance across all client subgroups. However, this approach overlooks disparities in model performance between subgroups. The proposed framework uses a minimax problem approach to minimize relative unfairness, extending previous methods in distributionally robust optimization (DRO). A novel fairness index, based on the ratio between large and small losses among clients, is introduced, allowing the framework to assess and improve the relative fairness of trained models. Theoretical guarantees demonstrate that the framework consistently reduces unfairness. We also develop an algorithm, named \textsc{Scaff-PD-IA}, which balances communication and computational efficiency while maintaining minimax-optimal convergence rates. Empirical evaluations on real-world datasets confirm its effectiveness in maintaining model performance while reducing disparity.
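The abstract's key quantities lend themselves to a small illustration. The sketch below (Python, not taken from the paper) shows a plausible relative-unfairness index as the ratio between the mean of the largest and smallest client losses, together with a DRO-style softmax reweighting of clients that approximates the inner maximization of a minimax objective. The quantile fraction `q`, the temperature `tau`, and both function names are illustrative assumptions rather than the paper's definitions, and Scaff-PD-IA itself is not reproduced here.

```python
# Minimal sketch (not the paper's exact formulation): a relative-unfairness
# index based on the ratio between large and small client losses, plus a
# DRO-style reweighting step that up-weights high-loss clients.
# q, tau, and the function names are illustrative assumptions.
import numpy as np

def relative_unfairness(client_losses, q=0.2):
    """Ratio of the mean loss of the worst q-fraction of clients to the
    mean loss of the best q-fraction (1.0 means perfectly even losses)."""
    losses = np.sort(np.asarray(client_losses, dtype=float))
    k = max(1, int(np.ceil(q * len(losses))))
    small = losses[:k].mean()
    large = losses[-k:].mean()
    return large / max(small, 1e-12)

def dro_client_weights(client_losses, tau=1.0):
    """Softmax reweighting over client losses: higher-loss clients receive
    larger aggregation weights, approximating the inner maximization of a
    minimax (DRO) objective."""
    losses = np.asarray(client_losses, dtype=float)
    z = (losses - losses.max()) / tau          # stabilised softmax
    w = np.exp(z)
    return w / w.sum()

if __name__ == "__main__":
    losses = [0.3, 0.5, 0.4, 1.8, 0.6]
    print(relative_unfairness(losses))   # 6.0: large/small loss ratio
    print(dro_client_weights(losses))    # weights concentrate on client 3
```

On the toy losses, the index is well above 1 and the weights concentrate on the highest-loss client, which is the qualitative behaviour the minimax objective is meant to correct.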
Related papers
- Enforcing Fairness Where It Matters: An Approach Based on Difference-of-Convex Constraints [12.054667230143803]
We focus on achieving fairness across all score ranges produced by predictive models, ensuring fairness in both high- and low-scoring populations.
We propose a novel approach targeted at the score range of interest, such as the middle range where decisions are most contested, while maintaining flexibility in other regions.
We introduce two statistical metrics to rigorously evaluate fairness within a given score range.
arXiv Detail & Related papers (2025-05-18T19:50:01Z) - FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated Learning [12.713572267830658]
FedTilt is a novel FL framework that can preserve multi-level fairness and be robust to outliers.
We show how tuning tilt values can achieve two-level fairness and mitigate persistent outliers.
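For intuition on how tilt values trade off fairness against outlier robustness, the snippet below is a minimal sketch of a tilted (log-sum-exp) aggregation of client losses in the spirit of tilted empirical risk minimization; FedTilt's actual two-level objective and its exact parameterisation may differ, and the values of `t` are purely illustrative.

```python
# Minimal sketch of tilted aggregation; FedTilt's exact two-level objective
# may differ. t is an illustrative tilt value.
import numpy as np

def tilted_loss(losses, t):
    """(1/t) * log(mean(exp(t * losses))).
    t > 0 emphasises large (unfair) losses, t < 0 suppresses outliers,
    and t -> 0 recovers the plain average."""
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-8:
        return losses.mean()
    # log-sum-exp trick for numerical stability
    m = (t * losses).max()
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t

client_losses = [0.2, 0.3, 0.25, 2.0]
print(tilted_loss(client_losses, t=5.0))   # leans towards the worst client
print(tilted_loss(client_losses, t=-5.0))  # damps the outlier
```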
arXiv Detail & Related papers (2025-03-15T19:57:23Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z) - Towards Fairness-Aware Adversarial Learning [13.932705960012846]
We propose a novel learning paradigm, named Fairness-Aware Adversarial Learning (FAAL).
Our method aims to find the worst-case distribution among different categories, and the solution is guaranteed to attain the upper-bound performance with high probability.
In particular, FAAL can fine-tune an unfair robust model to be fair within only two epochs, without compromising the overall clean and robust accuracies.
arXiv Detail & Related papers (2024-02-27T18:01:59Z) - Integrating Fairness and Model Pruning Through Bi-level Optimization [16.213634992886384]
We introduce a novel concept of fair model pruning, which involves developing a sparse model that adheres to fairness criteria.
In particular, we propose a framework to jointly optimize the pruning mask and weight update processes with fairness constraints.
This framework is engineered to compress models that maintain performance while ensuring fairness in a unified process.
arXiv Detail & Related papers (2023-12-15T20:08:53Z) - f-FERM: A Scalable Framework for Robust Fair Empirical Risk Minimization [9.591164070876689]
This paper presents a unified optimization framework for fair empirical risk minimization based on f-divergence measures (f-FERM).
In addition, our experiments demonstrate the superiority of fairness-accuracy tradeoffs offered by f-FERM for almost all batch sizes.
Our extension is based on a distributionally robust optimization reformulation of f-FERM objective under $L_p$ norms as uncertainty sets.
arXiv Detail & Related papers (2023-12-06T03:14:16Z) - Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
arXiv Detail & Related papers (2023-07-10T08:45:58Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfitting to their own optima, which can deviate significantly from the global objective.
FedSMOO adopts a dynamic regularizer to guide the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
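As a point of reference for the sharpness-aware component, the following is a minimal sketch of one generic sharpness-aware minimisation (SAM) update on a toy quadratic; it omits FedSMOO's dynamic regularizer and global-consistency correction, and the learning rate `lr` and perturbation radius `rho` are illustrative assumptions.

```python
# Minimal sketch of one generic SAM step; FedSMOO's dynamic regularizer and
# server-side correction are omitted. lr and rho are illustrative values.
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: perturb the weights in the ascent direction, then
    descend using the gradient evaluated at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # worst-case perturbation
    g_sharp = grad_fn(w + eps)                     # gradient at perturbed point
    return w - lr * g_sharp

# Toy quadratic example: loss(w) = 0.5 * ||w||^2, so grad(w) = w.
w = np.array([1.0, -2.0])
for _ in range(5):
    w = sam_step(w, grad_fn=lambda v: v)
print(w)  # moves towards the flat minimum at the origin
```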
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Entropy-driven Fair and Effective Federated Learning [26.22014904183881]
Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy.
We propose a novel FL framework that leverages information-theoretic aggregation combined with model and gradient alignments to simultaneously optimize fairness and global model performance.
arXiv Detail & Related papers (2023-01-29T10:02:42Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
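To make the AUC-constrained formulation concrete, the sketch below uses a smooth pairwise logistic surrogate for AUC and penalises the gap between group-wise surrogates; the paper optimises explicit constraints with stochastic methods rather than the simple penalty shown here, and `lam`, the penalty form, and the function names are assumptions for illustration.

```python
# Minimal sketch (not the paper's method): pairwise logistic surrogate for AUC
# plus a penalty on the gap between group-wise surrogates. lam is illustrative.
import numpy as np

def auc_surrogate(scores_pos, scores_neg):
    """Smooth surrogate of 1 - AUC: mean pairwise logistic loss on
    (positive score - negative score)."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.logaddexp(0.0, -diff))

def fair_auc_objective(scores, labels, groups, lam=1.0):
    """Overall AUC surrogate plus a penalty on the largest gap between
    group-wise AUC surrogates."""
    loss = auc_surrogate(scores[labels == 1], scores[labels == 0])
    per_group = []
    for g in np.unique(groups):
        m = groups == g
        per_group.append(auc_surrogate(scores[m & (labels == 1)],
                                       scores[m & (labels == 0)]))
    gap = max(per_group) - min(per_group)
    return loss + lam * gap

# Tiny usage example (each group contains both positives and negatives).
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.8, 0.1])
labels = np.array([1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
print(fair_auc_objective(scores, labels, groups, lam=1.0))
```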
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability to learn from distributed data sources collectively.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.