Achieving Distributive Justice in Federated Learning via Uncertainty Quantification
- URL: http://arxiv.org/abs/2504.15924v1
- Date: Tue, 22 Apr 2025 14:07:56 GMT
- Title: Achieving Distributive Justice in Federated Learning via Uncertainty Quantification
- Authors: Alycia Carey, Xintao Wu
- Abstract summary: UDJ-FL is a flexible learning framework that can achieve multiple distributive justice-based client-level fairness metrics.
We empirically show the ability of UDJ-FL to achieve all four defined distributive justice-based client-level fairness metrics.
- Score: 12.929357709840975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Client-level fairness metrics for federated learning are used to ensure that all clients in a federation either: a) have similar final performance on their local data distributions (i.e., client parity), or b) obtain final performance on their local data distributions relative to their contribution to the federated learning process (i.e., contribution fairness). While a handful of works that propose either client-parity or contribution-based fairness metrics ground their definitions and decisions in social theories of equality -- such as distributive justice -- most works choose a notion of fairness arbitrarily, making it difficult for practitioners to determine which fairness metric best aligns with their fairness ethics. In this work, we propose UDJ-FL (Uncertainty-based Distributive Justice for Federated Learning), a flexible federated learning framework that can achieve multiple distributive justice-based client-level fairness metrics. Namely, by utilizing techniques inspired by fair resource allocation in conjunction with aleatoric uncertainty-based client weighting, our UDJ-FL framework is able to achieve egalitarian, utilitarian, Rawls' difference principle, or desert-based client-level fairness. We empirically show the ability of UDJ-FL to achieve all four defined distributive justice-based client-level fairness metrics, in addition to providing fairness equivalent to (or surpassing) that of other popular fair federated learning works. Further, we justify why aleatoric uncertainty weighting is necessary for the construction of our UDJ-FL framework, and we derive generalization bounds for UDJ-FL. Our code is publicly available at https://github.com/alycia-noel/UDJ-FL.
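For a concrete picture of the aggregation idea the abstract describes, the sketch below shows one generic way to weight client updates by estimated aleatoric uncertainty before averaging. It is an illustration only, not the authors' algorithm: the function name, the `uncertainties` input, and the exponent `q` are assumptions of this sketch; how UDJ-FL actually estimates uncertainty and sets weights is defined in the paper and its repository.

```python
import numpy as np

def uncertainty_weighted_aggregate(client_params, uncertainties, q=1.0):
    """Illustrative sketch: aggregate client model parameters with weights
    derived from each client's estimated aleatoric uncertainty.

    client_params : list of 1-D numpy arrays (flattened client models)
    uncertainties : per-client aleatoric uncertainty estimates (higher =
                    noisier local data); estimating these is the paper's
                    contribution and is NOT reproduced here
    q             : assumed knob controlling how strongly uncertain clients
                    are up-weighted, in the spirit of fair resource
                    allocation schemes (q=0 recovers a uniform average)
    """
    u = np.asarray(uncertainties, dtype=float)
    weights = u ** q
    weights = weights / weights.sum()      # normalize to a convex combination
    stacked = np.stack(client_params)      # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

# Toy usage: three clients, the third has the noisiest local data.
params = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
print(uncertainty_weighted_aggregate(params, uncertainties=[0.1, 0.1, 0.8]))
```

Setting q = 0 yields a plain uniform average, while larger q shifts weight toward clients whose data are inherently harder, which is the lever that lets one framework span different distributive-justice notions.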
Related papers
- LoGoFair: Post-Processing for Local and Global Fairness in Federated Learning [20.12470856622916]
This paper proposes a novel post-processing framework for achieving both Local and Global Fairness in the FL context, namely LoGoFair.
Experimental results on three real-world datasets further illustrate the effectiveness of the proposed LoGoFair framework.
arXiv Detail & Related papers (2025-03-21T15:33:09Z)
- pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning [17.879602968559198]
Federated learning algorithms aim to maximize clients' accuracy by training a model on their collective data.
Group fairness constraints can be incorporated into the objective function of the FL optimization problem.
We show that such an approach would lead to suboptimal classification accuracy in an FL setting with heterogeneous client distributions.
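As a minimal sketch of the "fairness constraint in the objective" idea mentioned above (the function name and the penalty weight `lam` are hypothetical; this is not pFedFair's formulation, whose point is precisely that such global penalties can be suboptimal under heterogeneity):

```python
import numpy as np

def fairness_penalized_loss(losses, scores, groups, lam=1.0):
    """Sketch of a fairness-regularized objective: task loss plus a
    demographic-parity-style penalty on the gap in mean predicted score
    between two protected groups."""
    task_loss = np.mean(losses)
    gap = abs(scores[groups == 0].mean() - scores[groups == 1].mean())
    return task_loss + lam * gap

losses = np.array([0.2, 0.5, 0.1, 0.4])
scores = np.array([0.9, 0.2, 0.8, 0.3])   # model scores in [0, 1]
groups = np.array([0, 1, 0, 1])           # binary protected attribute
print(fairness_penalized_loss(losses, scores, groups, lam=0.5))
```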
arXiv Detail & Related papers (2025-03-19T06:15:31Z)
- Targeted Learning for Data Fairness [52.59573714151884]
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information.
To validate our approach, we perform several simulations and apply our estimators to real data.
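The paper's targeted-learning estimators are more involved; for orientation, simple plug-in versions of two of these fairness quantities look like this (function names here are illustrative):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Plug-in estimate of the demographic parity gap:
    |P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Plug-in estimate of the equal opportunity gap:
    |P(Yhat=1 | Y=1, A=0) - P(Yhat=1 | Y=1, A=1)|."""
    pos = y_true == 1
    return abs(y_pred[pos & (group == 0)].mean()
               - y_pred[pos & (group == 1)].mean())

y_pred = np.array([1, 0, 1, 1, 0, 1])
y_true = np.array([1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_pred, y_true, group))
```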
arXiv Detail & Related papers (2025-02-06T18:51:28Z)
- FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning [46.30755524556465]
We present FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness.
We develop a submodel allocation module with a theoretical guarantee of fairness.
Experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy.
arXiv Detail & Related papers (2024-05-28T15:43:29Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
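For intuition, a crude stand-in for such a regularizer is shown below: a KL divergence (one member of the f-divergence family) between the per-group score histograms. The min-max, adversarial estimation that underpins the paper's framework is omitted, and all names are illustrative.

```python
import numpy as np

def kl_fairness_penalty(scores, group, bins=10, eps=1e-8):
    """Sketch of an f-divergence fairness penalty: discretize model scores
    per protected group and penalize the KL divergence between the two
    groups' score distributions."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(scores[group == 0], bins=edges)
    q, _ = np.histogram(scores[group == 1], bins=edges)
    p = (p + eps) / (p + eps).sum()   # smooth and normalize
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
group = rng.integers(0, 2, size=200)
print(kl_fairness_penalty(scores, group))
```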
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Proportional Fairness in Federated Learning [27.086313029073683]
PropFair is a novel and easy-to-implement algorithm for finding proportionally fair solutions in federated learning.
We demonstrate that PropFair can approximately find PF solutions and that it achieves a good balance between the average performance of all clients and that of the worst 10% of clients.
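Proportional fairness (PF) here refers to the classical criterion from resource allocation: maximize the sum of log utilities. The sketch below evaluates that objective on toy client utilities; it illustrates the criterion only and is not claimed to be PropFair's exact training objective.

```python
import numpy as np

def proportional_fairness_objective(utilities, eps=1e-8):
    """Classical proportional-fairness (Nash bargaining) objective:
    the sum of log utilities. At a PF solution, no reallocation can
    increase the aggregate of relative utility gains."""
    u = np.asarray(utilities, dtype=float)
    return float(np.sum(np.log(u + eps)))

# Toy comparison: equal utilities score higher than a lopsided split
# with the same total, which is what pushes PF toward balanced clients.
print(proportional_fairness_objective([0.5, 0.5]))   # ~ -1.386
print(proportional_fairness_objective([0.9, 0.1]))   # ~ -2.408
```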
arXiv Detail & Related papers (2022-02-03T16:28:04Z)
- Collaborative Fairness in Federated Learning [24.7378023761443]
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL drives participants to converge to different models, thus achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
arXiv Detail & Related papers (2020-08-27T14:39:09Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
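A back-of-the-envelope version of conditional fairness can be checked by computing the parity gap within each stratum of the fairness variables, as sketched below. DCFR's differentiable regularizer is not reproduced here, and all names are illustrative.

```python
import numpy as np

def conditional_parity_gap(y_pred, group, cond):
    """Sketch of conditional fairness: compute the demographic parity gap
    within each stratum of the conditioning ('fairness') variable and
    report the worst stratum."""
    gaps = []
    for c in np.unique(cond):
        m = cond == c
        g0, g1 = y_pred[m & (group == 0)], y_pred[m & (group == 1)]
        if len(g0) and len(g1):            # skip strata missing a group
            gaps.append(abs(g0.mean() - g1.mean()))
    return max(gaps) if gaps else 0.0

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
cond   = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g., a fairness variable such as job level
print(conditional_parity_gap(y_pred, group, cond))
```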
arXiv Detail & Related papers (2020-06-18T12:56:28Z)