FedFair^3: Unlocking Threefold Fairness in Federated Learning
- URL: http://arxiv.org/abs/2401.16350v1
- Date: Mon, 29 Jan 2024 17:56:15 GMT
- Title: FedFair^3: Unlocking Threefold Fairness in Federated Learning
- Authors: Simin Javaherian, Sanjeev Panta, Shelby Williams, Md Sirajul Islam, Li Chen
- Abstract summary: Federated Learning (FL) is an emerging machine-learning paradigm that trains models collaboratively without exposing clients' raw data.
We propose a fair client-selection approach that unlocks threefold fairness in federated learning.
- Score: 6.481470306093991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging machine-learning paradigm
that trains models collaboratively without exposing clients' raw data. In
practical scenarios with numerous clients,
encouraging fair and efficient client participation in federated learning is of
utmost importance, which is also challenging given the heterogeneity in data
distribution and device properties. Existing works have proposed different
client-selection methods that consider fairness; however, they fail to select
clients with high utilities while simultaneously achieving fair accuracy
levels. In this paper, we propose a fair client-selection approach that unlocks
threefold fairness in federated learning. In addition to having a fair
client-selection strategy, we enforce an equitable number of rounds for client
participation and ensure a fair accuracy distribution over the clients. The
experimental results demonstrate that FedFair^3, in comparison to the
state-of-the-art baselines, achieves 18.15% less accuracy variance on the IID
data and 54.78% on the non-IID data, without decreasing the global accuracy.
Furthermore, it shows 24.36% less wall-clock training time on average.
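The abstract's core idea of selecting high-utility clients while keeping participation rounds equitable can be illustrated with a minimal sketch. The scoring rule, function names, and parameters below are illustrative assumptions, not FedFair^3's actual algorithm:

```python
def select_clients(utilities, participation_counts, num_select, alpha=0.5):
    """Score each client by utility minus a penalty for past participation,
    then pick the top-scoring clients.

    utilities: per-client utility estimates (e.g., recent local loss).
    participation_counts: rounds each client has already participated in.
    alpha: penalty weight; larger values equalize participation faster.
    """
    scores = [u - alpha * c for u, c in zip(utilities, participation_counts)]
    # Rank clients by score, highest first (stable sort breaks ties by index).
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(ranked[:num_select])

# Client 0 has the highest utility but has already joined 5 rounds,
# so the penalty lets less-used clients 1 and 2 win the two slots.
chosen = select_clients([0.9, 0.8, 0.7, 0.2], [5, 1, 0, 0],
                        num_select=2, alpha=0.1)
```

With alpha=0 this degenerates to pure utility-greedy selection; the participation penalty is what spreads rounds more evenly across clients.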
Related papers
- FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation [3.156133122658662]
Federated Learning (FL) enables collaborative model training across multiple clients without sharing clients' private data.
Heterogeneous data distributions across clients may lead to models that are fairer for some clients than others.
We introduce FeDa4Fair, a library to generate datasets tailored to evaluating fair FL methods under heterogeneous client bias.
arXiv Detail & Related papers (2025-06-26T08:43:12Z)
- PA-CFL: Privacy-Adaptive Clustered Federated Learning for Transformer-Based Sales Forecasting on Heterogeneous Retail Data [47.745068077169954]
Federated learning (FL) enables retailers to share model parameters for demand forecasting while maintaining privacy.
We propose Privacy-Adaptive Clustered Federated Learning (PA-CFL) tailored for demand forecasting on heterogeneous retail data.
arXiv Detail & Related papers (2025-03-15T18:07:54Z)
- Achieving Fairness Across Local and Global Models in Federated Learning [9.902848777262918]
This study introduces EquiFL, a novel approach designed to enhance both local and global fairness in Federated Learning environments.
EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness.
We demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness.
arXiv Detail & Related papers (2024-06-24T19:42:16Z)
- Towards Fairness in Provably Communication-Efficient Federated Recommender Systems [8.215115151660958]
In this study, we establish sample bounds that dictate the ideal number of clients required for improved communication efficiency.
In line with theoretical findings, we empirically demonstrate that RS-FairFRS reduces communication cost.
While random sampling improves communication efficiency, we propose a novel two-phase dual-fair update technique to achieve fairness without revealing protected attributes of active clients participating in training.
arXiv Detail & Related papers (2024-05-03T01:53:17Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling).
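The data-uniform idea can be sketched loosely: weight clients by local data size so that every individual training sample is equally likely to contribute, in contrast to uniform client sampling. The helper below is an illustrative assumption, not FedSampling's published method (which additionally estimates data sizes in a privacy-preserving way):

```python
import random

def data_uniform_sample(client_data_sizes, num_select, seed=0):
    """Draw clients with probability proportional to local data size, so each
    training sample (rather than each client) has an equal chance of being
    used. Redraws until `num_select` distinct clients are collected."""
    rng = random.Random(seed)
    clients = list(range(len(client_data_sizes)))
    selected = set()
    while len(selected) < num_select:
        # random.choices samples with replacement, weighted by data size.
        (c,) = rng.choices(clients, weights=client_data_sizes, k=1)
        selected.add(c)
    return sorted(selected)
```

A client holding ten times the data of another is ten times as likely to be drawn on each attempt, which is exactly what makes the induced sample-level distribution uniform.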
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
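A class-imbalance measure of this flavor can be sketched as the squared distance between the grouped data's class distribution and the uniform distribution. This is an assumed illustrative form, not Fed-CBS's actual measure, and the homomorphic-encryption step that lets the server derive it without seeing raw per-client label counts is omitted entirely:

```python
def class_imbalance(class_counts):
    """Squared L2 distance between the grouped data's class distribution and
    the uniform distribution: 0.0 for perfectly balanced data, larger values
    for more skewed groupings."""
    total = sum(class_counts)
    uniform = 1.0 / len(class_counts)
    return sum((c / total - uniform) ** 2 for c in class_counts)

# A balanced grouping scores 0.0; concentrating all data in one class
# yields the maximum score for that number of classes.
```

A sampler like Fed-CBS would then prefer client groups whose aggregated counts minimize this quantity.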
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Federated Learning Under Intermittent Client Availability and Time-Varying Communication Constraints [29.897785907692644]
Federated learning systems operate in settings with intermittent client availability and/or time-varying communication constraints.
We propose F3AST, an unbiased algorithm that learns an availability-dependent client selection strategy.
We show up to 186% and 8% accuracy improvements over FedAvg, and 8% and 7% over FedAdam on CIFAR100 and Shakespeare, respectively.
arXiv Detail & Related papers (2022-05-13T16:08:58Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a client selection scheme to solve the problem.
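Stochastic selection schemes in this family typically blend exploitation weights with a fairness floor so every volatile client retains a minimum chance of being chosen. The sketch below illustrates that blend only; it is an assumption for exposition, not the published E3CS update rule:

```python
def selection_probs(weights, min_prob, num_select):
    """Blend exploitation (mass proportional to `weights`) with a fairness
    floor (`min_prob` per client). The returned probabilities sum to
    `num_select`, i.e., the expected number of clients chosen per round."""
    assert min_prob * len(weights) <= num_select, "floor exceeds selection budget"
    budget = num_select - min_prob * len(weights)  # mass left after the floor
    total = sum(weights)
    return [min_prob + budget * w / total for w in weights]

# Client 2 has twice the weight of the others, so it receives the largest
# share of the remaining budget while everyone keeps at least min_prob.
probs = selection_probs([1, 1, 2], min_prob=0.2, num_select=2)
```

A full implementation would additionally cap each probability at 1 and redistribute any excess mass, and would update the weights from observed client performance round by round.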
arXiv Detail & Related papers (2020-11-17T16:35:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.