FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
- URL: http://arxiv.org/abs/2506.21095v2
- Date: Tue, 15 Jul 2025 13:22:28 GMT
- Title: FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
- Authors: Xenia Heilmann, Luca Corbucci, Mattia Cerrato, Anna Monreale,
- Abstract summary: Federated Learning (FL) enables collaborative model training across multiple clients without sharing clients' private data. Heterogeneous data distributions across clients may lead to models that are fairer for some clients than others. We introduce FeDa4Fair, a library to generate datasets tailored to evaluating fair FL methods under heterogeneous client bias.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning (FL) enables collaborative model training across multiple clients without sharing clients' private data. However, fairness remains a key concern, as biases in local clients' datasets can impact the entire federated system. Heterogeneous data distributions across clients may lead to models that are fairer for some clients than others. Although several fairness-enhancing solutions are present in the literature, most focus on mitigating bias for a single sensitive attribute, typically binary, overlooking the diverse and sometimes conflicting fairness needs of different clients. This limited perspective can limit the effectiveness of fairness interventions for the different clients. To support more robust and reproducible fairness research in FL, we aim to enable a consistent benchmarking of fairness-aware FL methods at both the global and client levels. In this paper, we contribute in three ways: (1) We introduce FeDa4Fair, a library to generate tabular datasets tailored to evaluating fair FL methods under heterogeneous client bias; (2) we release four bias-heterogeneous datasets and corresponding benchmarks to compare fairness mitigation methods in a controlled environment; (3) we provide ready-to-use functions for evaluating fairness outcomes for these datasets.
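The abstract's third contribution is ready-to-use functions for evaluating fairness outcomes at both the client and global levels. FeDa4Fair's actual API is not shown here, so the following is a generic, self-contained sketch of that idea using demographic parity difference, a standard group-fairness metric; all function and variable names are illustrative, not the library's.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups encoded in `sensitive` (0/1). Lower values mean fairer predictions."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def client_level_fairness(clients):
    """Evaluate fairness per client and over the pooled predictions.
    `clients` maps a client id to a (predictions, sensitive attribute) pair."""
    per_client = {
        cid: demographic_parity_difference(y, s)
        for cid, (y, s) in clients.items()
    }
    all_y = np.concatenate([y for y, _ in clients.values()])
    all_s = np.concatenate([s for _, s in clients.values()])
    return per_client, demographic_parity_difference(all_y, all_s)

# Two clients with opposite biases: each is maximally unfair locally,
# yet the pooled (global) predictions look perfectly fair.
clients = {
    "c1": (np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])),  # favors group 0
    "c2": (np.array([0, 0, 1, 1]), np.array([0, 0, 1, 1])),  # favors group 1
}
per_client, global_dpd = client_level_fairness(clients)
# per_client: {"c1": 1.0, "c2": 1.0}; global_dpd: 0.0
```

The toy example illustrates why the paper argues for client-level evaluation: a global fairness score of zero can hide severe, opposite-direction biases at individual clients.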
Related papers
- Mitigating Group-Level Fairness Disparities in Federated Visual Language Models [115.16940773660104]
This paper introduces FVL-FP, a novel framework that combines FL with fair prompt tuning techniques. We focus on mitigating demographic biases while preserving model performance. Our approach reduces demographic disparity by an average of 45% compared to standard FL approaches.
arXiv Detail & Related papers (2025-05-03T16:09:52Z)
- pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning [17.879602968559198]
Federated learning algorithms aim to maximize clients' accuracy by training a model on their collective data. Group fairness constraints can be incorporated into the objective function of the FL optimization problem. We show that such an approach would lead to suboptimal classification accuracy in an FL setting with heterogeneous client distributions.
arXiv Detail & Related papers (2025-03-19T06:15:31Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Friends in Unexpected Places: Enhancing Local Fairness in Federated Learning through Clustering [15.367801388932145]
Federated Learning (FL) has been a pivotal paradigm for collaborative training of machine learning models across distributed datasets. In this paper, we propose new FL algorithms for heterogeneous settings, spanning the space between personalized and locally fair FL.
arXiv Detail & Related papers (2024-07-27T19:55:18Z)
- Achieving Fairness Across Local and Global Models in Federated Learning [9.902848777262918]
This study introduces EquiFL, a novel approach designed to enhance both local and global fairness in Federated Learning environments.
EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness.
We demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness.
arXiv Detail & Related papers (2024-06-24T19:42:16Z)
- FedFair^3: Unlocking Threefold Fairness in Federated Learning [6.481470306093991]
Federated Learning (FL) is an emerging machine learning paradigm that trains models without exposing clients' raw data.
We propose a fair client-selection approach that unlocks threefold fairness in federated learning.
arXiv Detail & Related papers (2024-01-29T17:56:15Z)
- GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning [8.033939709734451]
Federated learning (FL) has emerged as a prospective solution for collaboratively learning a shared model across clients without sacrificing their data privacy.
FL tends to be biased against certain demographic groups due to the inherent FL properties, such as data heterogeneity and party selection.
We propose GLOCALFAIR, a client-server co-design that can improve global and local group fairness without the need for sensitive statistics about clients' private datasets.
arXiv Detail & Related papers (2024-01-07T18:10:14Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel uniform data sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model for each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these teacher models into each client's local model.
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
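The main paper's datasets are built around heterogeneous client bias: different clients exhibit biases of different strengths (and possibly directions). FeDa4Fair's generation API is not reproduced here; the following is a minimal generic sketch, under the assumption that bias can be modeled as the probability that the favorable label co-occurs with one sensitive group, of synthesizing clients whose sensitive-attribute/label correlation varies.

```python
import numpy as np

def make_biased_client(n, bias, rng):
    """Synthesize one client's (sensitive attribute, label) pairs.

    `bias` in [0, 1] is the probability that the favorable label
    co-occurs with the privileged group; 0.5 means no bias, and the
    expected label-rate gap between groups is |2 * bias - 1|.
    """
    sensitive = rng.integers(0, 2, size=n)
    favored = rng.random(n) < bias
    label = np.where(favored, sensitive, 1 - sensitive)
    return sensitive, label

def label_rate_gap(sensitive, label):
    """Observed gap in positive-label rates between the two groups."""
    return abs(label[sensitive == 1].mean() - label[sensitive == 0].mean())

rng = np.random.default_rng(42)
# Three clients with heterogeneous bias: strong, mild, and none.
clients = {f"c{i}": make_biased_client(1000, b, rng)
           for i, b in enumerate([0.9, 0.7, 0.5])}
gaps = {cid: label_rate_gap(s, y) for cid, (s, y) in clients.items()}
# gaps are roughly 0.8, 0.4, and 0.0 respectively
```

A fairness-mitigation method tuned to one bias level can then be stress-tested against clients like these, whose local biases disagree, which is the evaluation setting the paper targets.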
This list is automatically generated from the titles and abstracts of the papers on this site.