Toward Fair Federated Learning under Demographic Disparities and Data Imbalance
- URL: http://arxiv.org/abs/2505.09295v1
- Date: Wed, 14 May 2025 11:22:54 GMT
- Title: Toward Fair Federated Learning under Demographic Disparities and Data Imbalance
- Authors: Qiming Wu, Siqi Li, Doudou Zhou, Nan Liu
- Abstract summary: Federated learning (FL) enables privacy-preserving collaboration across institutions. We propose FedIDA, a framework-agnostic method that combines fairness-aware regularization with group-conditional oversampling. We show that FedIDA consistently improves fairness while maintaining competitive predictive performance.
- Score: 8.444310568786408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring fairness is critical when applying artificial intelligence to high-stakes domains such as healthcare, where predictive models trained on imbalanced and demographically skewed data risk exacerbating existing disparities. Federated learning (FL) enables privacy-preserving collaboration across institutions, but remains vulnerable to both algorithmic bias and subgroup imbalance - particularly when multiple sensitive attributes intersect. We propose FedIDA (Federated Learning for Imbalance and Disparity Awareness), a framework-agnostic method that combines fairness-aware regularization with group-conditional oversampling. FedIDA supports multiple sensitive attributes and heterogeneous data distributions without altering the convergence behavior of the underlying FL algorithm. We provide theoretical analysis establishing fairness improvement bounds using Lipschitz continuity and concentration inequalities, and show that FedIDA reduces the variance of fairness metrics across test sets. Empirical results on both benchmark and real-world clinical datasets confirm that FedIDA consistently improves fairness while maintaining competitive predictive performance, demonstrating its effectiveness for equitable and privacy-preserving modeling in healthcare. The source code is available on GitHub.
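The abstract's two ingredients - group-conditional oversampling and a fairness-aware regularizer - can be illustrated with a minimal sketch. The helper names and the demographic-parity penalty below are illustrative assumptions, not the released FedIDA code:

```python
import random
from collections import Counter

def group_conditional_oversample(samples, seed=0):
    """Resample so every (label, sensitive-group) cell matches the size
    of the largest cell. `samples` holds (features, label, group) tuples."""
    rng = random.Random(seed)
    cells = {}
    for s in samples:
        cells.setdefault((s[1], s[2]), []).append(s)
    target = max(len(cell) for cell in cells.values())
    balanced = []
    for cell in cells.values():
        balanced.extend(cell)
        # draw with replacement until the cell reaches the target size
        balanced.extend(rng.choices(cell, k=target - len(cell)))
    return balanced

def demographic_parity_penalty(predictions, groups):
    """Fairness-aware regularizer: the gap between the highest and lowest
    mean positive-prediction rate across sensitive groups."""
    rates = {}
    for p, g in zip(predictions, groups):
        rates.setdefault(g, []).append(p)
    means = [sum(v) / len(v) for v in rates.values()]
    return max(means) - min(means)
```

A penalty of this form would be added to each client's local loss, while the oversampling runs on each client's data before training - consistent with the claim that the underlying FL algorithm's convergence behavior is untouched.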
Related papers
- A Unifying Human-Centered AI Fairness Framework [2.9385229328767988]
We introduce a unifying human-centered fairness framework that covers eight distinct fairness metrics. Rather than privileging a single fairness notion, the framework enables stakeholders to assign weights across multiple fairness objectives. We show that adjusting weights reveals nuanced trade-offs between different fairness metrics.
arXiv Detail & Related papers (2025-12-07T17:52:38Z)
- Accurate Target Privacy Preserving Federated Learning Balancing Fairness and Utility [28.676852732262407]
Federated Learning (FL) enables collaborative model training without data sharing. We introduce a differentially private fair FL algorithm that transforms this multi-objective optimization into a zero-sum game. Our theoretical analysis reveals a surprising inverse relationship, i.e., stricter privacy protection limits the system's ability to detect and correct demographic biases.
arXiv Detail & Related papers (2025-10-30T07:14:55Z)
- Reliable and Reproducible Demographic Inference for Fairness in Face Analysis [63.46525489354455]
We propose a fully reproducible DAI pipeline that replaces conventional end-to-end training with a modular transfer learning approach. We audit this pipeline across three dimensions: accuracy, fairness, and a newly introduced notion of robustness, defined via intra-identity consistency. Our results show that the proposed method outperforms strong baselines, particularly on ethnicity, which is the more challenging attribute.
arXiv Detail & Related papers (2025-10-23T12:22:02Z)
- Fairness-Constrained Optimization Attack in Federated Learning [26.380464066437668]
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. This paper proposes an intentional fairness attack, where a client maliciously sends a biased model. We evaluate our attack against the state-of-the-art Byzantine-robust and fairness-aware aggregation schemes over different datasets.
arXiv Detail & Related papers (2025-10-14T04:49:53Z)
- FedFiTS: Fitness-Selected, Slotted Client Scheduling for Trustworthy Federated Learning in Healthcare AI [33.17279604575767]
Federated Learning (FL) has emerged as a powerful paradigm for privacy-preserving model training, yet deployments in sensitive domains such as healthcare face persistent challenges. This paper introduces FedFiTS, a trust-aware selective client-scheduling framework that advances the FedFaSt line by combining fitness-based client election with adaptive aggregation.
arXiv Detail & Related papers (2025-09-23T15:06:04Z)
- A Comparative Benchmark of Federated Learning Strategies for Mortality Prediction on Heterogeneous and Imbalanced Clinical Data [0.0]
Federated Learning (FL) offers a privacy-preserving solution, but its performance under non-Independent and Identically Distributed (non-IID) and imbalanced conditions requires investigation. This study presents a comparative benchmark of five federated learning strategies: FedAvg, FedProx, FedAdagrad, FedAdam, and FedCluster for mortality prediction. Our findings indicate that regularization-based FL algorithms like FedProx offer a more robust and effective solution for heterogeneous and imbalanced clinical prediction tasks.
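The FedProx strategy highlighted here differs from FedAvg mainly in its local objective, which adds a proximal term (mu/2)·||w − w_global||² pulling each client toward the global model. A minimal sketch of the resulting gradient step, using plain lists of scalar weights purely for illustration:

```python
def fedprox_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    """One FedProx-style local update: gradient step on the client loss
    plus mu * (w - w_global), the gradient of the proximal term that
    keeps the client model close to the current global model."""
    return [wi - lr * (gi + mu * (wi - wgi))
            for wi, gi, wgi in zip(w, grad, w_global)]
```

With mu = 0 this reduces to a plain FedAvg local step; a larger mu limits client drift, which is why proximal regularization helps under non-IID data.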
arXiv Detail & Related papers (2025-09-03T11:32:57Z)
- FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [13.575259448363557]
We propose a controllable group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints. Experiments on multiple datasets demonstrate that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z)
- RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility in Autonomous Vehicles [6.3338980105224145]
Existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups. This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution optimizing both. RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation. We evaluate RESFL on the FACET dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions.
arXiv Detail & Related papers (2025-03-20T15:46:03Z)
- FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction [10.016644624468762]
We present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations.
As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models.
arXiv Detail & Related papers (2024-10-07T13:02:04Z)
- Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data [45.11652096723593]
Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks.
This paper proposes FatCC, which incorporates local logit Calibration and global feature Contrast into the vanilla federated adversarial training process from both logit and feature perspectives.
arXiv Detail & Related papers (2024-04-10T06:35:25Z)
- Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
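The augmented Lagrangian idea behind FFALM can be sketched by its dual step. Treating a fairness gap as an inequality constraint g(w) = gap − eps ≤ 0, the multiplier that scales the fairness penalty in the primal objective is updated as lambda ← max(0, lambda + rho·g(w)). The function below is an illustrative sketch of that generic update, not the paper's exact rule:

```python
def dual_update(lmbda, fairness_gap, eps=0.05, rho=1.0):
    """Multiplier step for the constraint g(w) = fairness_gap - eps <= 0.
    When the measured gap exceeds the tolerance eps, the multiplier (and
    hence the fairness penalty weight) grows; otherwise it decays toward
    zero, never going negative."""
    return max(0.0, lmbda + rho * (fairness_gap - eps))
```

Alternating this dual step with primal model updates is what lets the method trade off accuracy against group fairness with a convergence guarantee.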
arXiv Detail & Related papers (2023-07-10T08:45:58Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models to the cross devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
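The schedule described above - periodically skipping federated averaging and scattering local models across devices - can be sketched as a toy server step. Models are plain floats here purely for illustration, and the round-robin scattering rule is an assumption; the paper's actual rule may differ:

```python
def fedskip_round(round_idx, client_models, skip_every=3):
    """One server step of a FedSkip-style schedule: on most rounds,
    average the client models as in FedAvg; on every `skip_every`-th
    round, skip averaging and scatter the local models across clients."""
    n = len(client_models)
    if round_idx % skip_every == 0:
        # scatter: client i receives client (i + 1)'s local model
        return [client_models[(i + 1) % n] for i in range(n)]
    avg = sum(client_models) / n
    return [avg] * n
```

The scattered rounds expose each client to its peers' optima rather than their mean, which is how the method targets statistical heterogeneity.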
arXiv Detail & Related papers (2022-12-14T13:57:01Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data [31.611582207768464]
Federated learning (FL) allows agents to jointly train a global model without sharing their local data.
We propose a formal FL fairness definition, fairness via agent-awareness (FAA), which takes different contributions of heterogeneous agents into account.
We also propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve fairness in FL measured by FAA.
arXiv Detail & Related papers (2022-07-21T02:21:03Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning. We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.