Federated Fairness Analytics: Quantifying Fairness in Federated Learning
- URL: http://arxiv.org/abs/2408.08214v1
- Date: Thu, 15 Aug 2024 15:23:32 GMT
- Title: Federated Fairness Analytics: Quantifying Fairness in Federated Learning
- Authors: Oscar Dilley, Juan Marcelo Parra-Ullauri, Rasheed Hussain, Dimitra Simeonidou
- Abstract summary: Federated Learning (FL) is a privacy-enhancing technology for distributed ML.
FL inherits fairness challenges from classical ML and introduces new ones.
We propose Federated Fairness Analytics - a methodology for measuring fairness.
- Score: 2.9674793945631097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a privacy-enhancing technology for distributed ML. By training models locally and aggregating updates, a federation learns together while bypassing centralised data collection. FL is increasingly popular in healthcare, finance and personal computing. However, it inherits fairness challenges from classical ML and introduces new ones resulting from differences in data quality, client participation, communication constraints, aggregation methods and underlying hardware. Fairness remains an unresolved issue in FL, and the community has identified an absence of succinct definitions and metrics to quantify fairness; to address this, we propose Federated Fairness Analytics, a methodology for measuring fairness. Our definition of fairness comprises four notions with novel, corresponding metrics. They are symptomatically defined and leverage techniques originating from XAI, cooperative game theory and network engineering. We tested a range of experimental settings, varying the FL approach, ML task and data settings. The results show that statistical heterogeneity and client participation affect fairness, and that fairness-conscious approaches such as Ditto and q-FedAvg marginally improve fairness-performance trade-offs. Using our techniques, FL practitioners can uncover previously unobtainable insights into their system's fairness at differing levels of granularity, in order to address fairness challenges in FL. We have open-sourced our work at: https://github.com/oscardilley/federated-fairness.
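As an illustration of the kind of client-level measurement such metrics enable, the sketch below applies Jain's fairness index, a standard fairness measure from network engineering (one of the fields the paper's metrics draw on), to per-client test accuracies. This is an assumed example for orientation only, not the paper's own metric definition; the actual metrics and their implementation are in the linked repository.

```python
# Illustrative sketch only: Jain's fairness index, a standard fairness measure
# from network engineering, applied to per-client test accuracies. The paper
# defines its own four symptomatic fairness notions and metrics; this is not
# necessarily one of them.
from typing import Sequence


def jains_index(per_client_accuracy: Sequence[float]) -> float:
    """Return Jain's fairness index in (0, 1]; 1.0 means all clients perform equally."""
    n = len(per_client_accuracy)
    sum_sq = sum(a * a for a in per_client_accuracy)
    if n == 0 or sum_sq == 0.0:
        raise ValueError("need at least one client with a non-zero accuracy")
    total = sum(per_client_accuracy)
    return (total * total) / (n * sum_sq)


if __name__ == "__main__":
    # Hypothetical per-client accuracies after federated training.
    accuracies = [0.91, 0.88, 0.62, 0.79]
    print(f"Jain's fairness index: {jains_index(accuracies):.3f}")
```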
Related papers
- Accurate Forgetting for Heterogeneous Federated Continual Learning [89.08735771893608]
We propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method which selectively utilizes previous knowledge in federated networks.
We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge.
arXiv Detail & Related papers (2025-02-20T02:35:17Z) - AFed: Algorithmic Fair Federated Learning [13.216737333440596]
Federated Learning (FL) has gained significant attention as it facilitates collaborative machine learning among multiple clients without centralizing their data on a server.
Traditional debiasing methods assume centralized access to sensitive information, rendering them impractical for the FL setting.
This paper presents AFed, a framework for promoting group fairness in FL without access to client local data.
arXiv Detail & Related papers (2025-01-06T03:05:49Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH(Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z) - Demystifying Local and Global Fairness Trade-offs in Federated Learning
Using Partial Information Decomposition [7.918307236588161]
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL).
We identify three sources of unfairness in FL, namely, $\textit{Unique Disparity}$, $\textit{Redundant Disparity}$, and $\textit{Masked Disparity}$.
We derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree.
arXiv Detail & Related papers (2023-07-21T03:41:55Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - FairVFL: A Fair Vertical Federated Learning Framework with Contrastive
Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z) - Proportional Fairness in Federated Learning [27.086313029073683]
PropFair is a novel and easy-to-implement algorithm for finding proportionally fair solutions in federated learning.
We demonstrate that PropFair can approximately find PF solutions, and that it achieves a good balance between the average performance of all clients and that of the worst 10% of clients (a sketch of the classical proportional-fairness objective is given after this list).
arXiv Detail & Related papers (2022-02-03T16:28:04Z) - Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value on each training sample in both loss function and fairness constraint.
The built model can be directly applied to local sites, as it guarantees fairness on local data distributions.
arXiv Detail & Related papers (2020-10-10T17:58:20Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
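For orientation on the PropFair entry above: proportional fairness (PF) classically refers to allocations that maximise a sum of logarithmic utilities. A minimal sketch of how such an objective might be adapted to per-client performance in FL is given below; the utility $u_i$, baseline constant $M$ and per-client loss $\ell_i$ are illustrative placeholders, and the exact PropFair objective is defined in that paper.

```latex
% Classical proportional fairness: choose model parameters \theta that maximise
% the sum of per-client log-utilities over N clients. The utility u_i, the
% baseline constant M and the per-client loss \ell_i are illustrative
% placeholders, not PropFair's exact formulation.
\max_{\theta} \; \sum_{i=1}^{N} \log u_i(\theta),
\qquad u_i(\theta) = M - \ell_i(\theta).
```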