Demystifying Local and Global Fairness Trade-offs in Federated Learning
Using Partial Information Decomposition
- URL: http://arxiv.org/abs/2307.11333v2
- Date: Mon, 4 Mar 2024 22:56:09 GMT
- Title: Demystifying Local and Global Fairness Trade-offs in Federated Learning
Using Partial Information Decomposition
- Authors: Faisal Hamman, Sanghamitra Dutta
- Abstract summary: This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL).
We identify three sources of unfairness in FL, namely, $\textit{Unique Disparity}$, $\textit{Redundant Disparity}$, and $\textit{Masked Disparity}$.
We derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents an information-theoretic perspective on group fairness
trade-offs in federated learning (FL) with respect to sensitive attributes,
such as gender, race, etc. Existing works often focus on either $\textit{global
fairness}$ (overall disparity of the model across all clients) or
$\textit{local fairness}$ (disparity of the model at each client), without
always considering their trade-offs. There is a lack of understanding regarding
the interplay between global and local fairness in FL, particularly under data
heterogeneity, and if and when one implies the other. To address this gap, we
leverage a body of work in information theory called partial information
decomposition (PID), which first identifies three sources of unfairness in FL,
namely, $\textit{Unique Disparity}$, $\textit{Redundant Disparity}$, and
$\textit{Masked Disparity}$. We demonstrate how these three disparities
contribute to global and local fairness using canonical examples. This
decomposition helps us derive fundamental limits on the trade-off between
global and local fairness, highlighting where they agree or disagree. We
introduce the $\textit{Accuracy and Global-Local Fairness Optimality Problem
(AGLFOP)}$, a convex optimization that defines the theoretical limits of
accuracy and fairness trade-offs, identifying the best possible performance any
FL strategy can attain given a dataset and client distribution. We also present
experimental results on synthetic datasets and the ADULT dataset to support our
theoretical findings.
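The three disparities above can be made concrete with a small numerical sketch. The following is a minimal illustration (not the authors' code) of a Williams-Beer-style partial information decomposition in the paper's setting: sensitive attribute $Z$, prediction $\hat{Y}$, and client identity $S$, all binary, with global disparity measured as $I(Z;\hat{Y})$ and local disparity as $I(Z;\hat{Y}|S)$. The joint distribution is a hypothetical "Masked Disparity" scenario: each client's predictions correlate with $Z$ locally, but the correlations cancel when clients are pooled.

```python
import numpy as np

# Hypothetical joint pmf p[z, yhat, s]: client 0 mostly predicts yhat = z,
# client 1 mostly predicts yhat != z, so pooled (Z, Yhat) are independent.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = 0.20; p[0, 1, 0] = 0.05
p[1, 0, 0] = 0.05; p[1, 1, 0] = 0.20
p[0, 0, 1] = 0.05; p[0, 1, 1] = 0.20
p[1, 0, 1] = 0.20; p[1, 1, 1] = 0.05

def mutual_info(pxy):
    """I(X;Y) in bits for a 2-D joint pmf."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def specific_info(pza):
    """Williams-Beer specific information I(Z=z; A) for each value z."""
    pz, pa = pza.sum(axis=1), pza.sum(axis=0)
    out = np.zeros(pza.shape[0])
    for z in range(pza.shape[0]):
        for a in range(pza.shape[1]):
            if pza[z, a] > 0:
                out[z] += (pza[z, a] / pz[z]) * np.log2(pza[z, a] / (pz[z] * pa[a]))
    return out

p_zy = p.sum(axis=2)       # joint of (Z, Yhat)
p_zs = p.sum(axis=1)       # joint of (Z, S)
p_zys = p.reshape(2, -1)   # Z versus the pair (Yhat, S)
pz = p_zy.sum(axis=1)

I_global = mutual_info(p_zy)                        # I(Z; Yhat)
I_local = mutual_info(p_zys) - mutual_info(p_zs)    # I(Z; Yhat | S)

# Redundancy via I_min, then the unique and synergistic (masked) parts.
red = float((pz * np.minimum(specific_info(p_zy), specific_info(p_zs))).sum())
unique_y = I_global - red                  # Unique Disparity
unique_s = mutual_info(p_zs) - red
masked = mutual_info(p_zys) - red - unique_y - unique_s  # Masked Disparity

print(f"global={I_global:.4f} local={I_local:.4f} "
      f"unique={unique_y:.4f} redundant={red:.4f} masked={masked:.4f}")
```

For this toy distribution the global disparity is 0 bits while the local disparity is about 0.278 bits, all of it attributable to Masked Disparity: the model is globally fair yet locally unfair at every client, the regime the decomposition is designed to expose. By the chain rule the identity $I(Z;\hat{Y}|S) = \text{Unique} + \text{Masked}$ holds automatically in this sketch.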
Related papers
- WassFFed: Wasserstein Fair Federated Learning
Federated Learning (FL) trains models in scenarios where users' data cannot be shared across clients.
We propose a Wasserstein Fair Federated Learning framework, namely WassFFed.
arXiv Detail & Related papers (2024-11-11T11:26:22Z)
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Federated Fairness Analytics: Quantifying Fairness in Federated Learning
Federated Learning (FL) is a privacy-enhancing technology for distributed ML.
FL inherits fairness challenges from classical ML and introduces new ones.
We propose Federated Fairness Analytics - a methodology for measuring fairness.
arXiv Detail & Related papers (2024-08-15T15:23:32Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning
Federated learning (FL) has emerged as a prospective solution for collaboratively learning a shared model across clients without sacrificing their data privacy.
FL tends to be biased against certain demographic groups due to the inherent FL properties, such as data heterogeneity and party selection.
We propose GLOCALFAIR, a client-server co-design that can improve global and local group fairness without the need for sensitive statistics about the clients' private datasets.
arXiv Detail & Related papers (2024-01-07T18:10:14Z)
- Multi-dimensional Fair Federated Learning
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, named FedCSD, which uses class-prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
Federated learning (FL) allows agents to jointly train a global model without sharing their local data.
We propose a formal FL fairness definition, fairness via agent-awareness (FAA), which takes different contributions of heterogeneous agents into account.
We also propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve fairness in FL measured by FAA.
arXiv Detail & Related papers (2022-07-21T02:21:03Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.