LoGoFair: Post-Processing for Local and Global Fairness in Federated Learning
- URL: http://arxiv.org/abs/2503.17231v1
- Date: Fri, 21 Mar 2025 15:33:09 GMT
- Title: LoGoFair: Post-Processing for Local and Global Fairness in Federated Learning
- Authors: Li Zhang, Chaochao Chen, Zhongxuan Han, Qiyong Zhong, Xiaolin Zheng,
- Abstract summary: This paper proposes a novel post-processing framework for achieving both Local and Global Fairness in the FL context, namely LoGoFair. Experimental results on three real-world datasets further illustrate the effectiveness of the proposed LoGoFair framework.
- Score: 20.12470856622916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has garnered considerable interest for its capability to learn from decentralized data sources. Given the increasing application of FL in decision-making scenarios, addressing fairness issues across different sensitive groups (e.g., female, male) in FL is crucial. Current research often focuses on facilitating fairness at each client's data (local fairness) or within the entire dataset across all clients (global fairness). However, existing approaches that focus exclusively on either local or global fairness fail to address two key challenges: (CH1) Under statistical heterogeneity, global fairness does not imply local fairness, and vice versa. (CH2) Achieving fairness in a model-agnostic setting. To tackle the aforementioned challenges, this paper proposes a novel post-processing framework for achieving both Local and Global Fairness in the FL context, namely LoGoFair. To address CH1, LoGoFair endeavors to seek the Bayes optimal classifier under local and global fairness constraints, which strikes the optimal accuracy-fairness balance in the probabilistic sense. To address CH2, LoGoFair employs a model-agnostic federated post-processing procedure that enables clients to collaboratively optimize global fairness while ensuring local fairness, thereby achieving the optimal fair classifier within FL. Experimental results on three real-world datasets further illustrate the effectiveness of the proposed LoGoFair framework.
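Challenge CH1 can be made concrete with a small numeric sketch (toy data and invented values, not from the paper): two clients whose group-conditional predictions are biased in opposite directions yield a globally fair pooled classifier that is maximally unfair on every individual client.

```python
# Toy illustration of CH1: under statistical heterogeneity, global fairness
# (small pooled demographic-parity gap) does not imply local fairness.
# All data below is invented for illustration.

def dp_gap(groups, preds):
    """Demographic-parity gap: |P(yhat=1 | A=0) - P(yhat=1 | A=1)|."""
    rate = {}
    for a in (0, 1):
        idx = [i for i, g in enumerate(groups) if g == a]
        rate[a] = sum(preds[i] for i in idx) / len(idx)
    return abs(rate[0] - rate[1])

# Client 1's model favors group 0; client 2's favors group 1.
c1_groups = [0, 0, 0, 0, 1, 1, 1, 1]
c1_preds  = [1, 1, 1, 1, 0, 0, 0, 0]   # local DP gap = 1.0
c2_groups = [0, 0, 0, 0, 1, 1, 1, 1]
c2_preds  = [0, 0, 0, 0, 1, 1, 1, 1]   # local DP gap = 1.0

local_gaps = [dp_gap(c1_groups, c1_preds), dp_gap(c2_groups, c2_preds)]
global_gap = dp_gap(c1_groups + c2_groups, c1_preds + c2_preds)

print(local_gaps)   # [1.0, 1.0] -- each client is maximally unfair locally
print(global_gap)   # 0.0       -- yet the pooled predictions are perfectly fair
```

The opposite direction fails symmetrically: equalizing each client's local gap does not bound the pooled gap when group proportions differ across clients, which is why LoGoFair imposes local and global constraints jointly rather than enforcing one and hoping for the other.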
Related papers
- The Cost of Local and Global Fairness in Federated Learning [4.088196820932921]
Two concepts of fairness are important in Federated Learning (FL): global fairness and local fairness.
This paper proposes a framework that investigates the minimum accuracy lost for enforcing a specified level of global and local fairness in multi-class FL settings.
arXiv Detail & Related papers (2025-03-27T18:37:54Z) - WassFFed: Wasserstein Fair Federated Learning [31.135784690264888]
Federated Learning (FL) trains models collaboratively in scenarios where users' data cannot be shared across clients.
We propose a Wasserstein Fair Federated Learning framework, namely WassFFed.
arXiv Detail & Related papers (2024-11-11T11:26:22Z) - Achieving Fairness Across Local and Global Models in Federated Learning [9.902848777262918]
This study introduces EquiFL, a novel approach designed to enhance both local and global fairness in Federated Learning environments.
EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness.
We demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness.
arXiv Detail & Related papers (2024-06-24T19:42:16Z) - Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization [81.32266996009575]
In federated learning (FL), the multi-step update and data heterogeneity among clients often lead to a loss landscape with sharper minima.
We propose FedLESAM, a novel algorithm that locally estimates the direction of global perturbation on client side.
arXiv Detail & Related papers (2024-05-29T08:46:21Z) - Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z) - GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning [8.033939709734451]
Federated learning (FL) has emerged as a prospective solution for collaboratively learning a shared model across clients without sacrificing their data privacy.
FL tends to be biased against certain demographic groups due to the inherent FL properties, such as data heterogeneity and party selection.
We propose GLOCALFAIR, a client-server co-design that can improve global and local group fairness without the need for sensitive statistics about each client's private dataset.
arXiv Detail & Related papers (2024-01-07T18:10:14Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, a Class prototype Similarity Distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition [7.918307236588161]
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL).
We identify three sources of unfairness in FL, namely, Unique Disparity, Redundant Disparity, and Masked Disparity.
We derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree.
arXiv Detail & Related papers (2023-07-21T03:41:55Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.