FOCUS: Fairness via Agent-Awareness for Federated Learning on
Heterogeneous Data
- URL: http://arxiv.org/abs/2207.10265v4
- Date: Thu, 16 Nov 2023 02:37:34 GMT
- Title: FOCUS: Fairness via Agent-Awareness for Federated Learning on
Heterogeneous Data
- Authors: Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian,
Han Zhao, Bo Li
- Abstract summary: Federated learning (FL) allows agents to jointly train a global model without sharing their local data.
We propose a formal FL fairness definition, fairness via agent-awareness (FAA), which takes different contributions of heterogeneous agents into account.
We also propose a fair FL training algorithm based on agent clustering (FOCUS) to achieve fairness in FL measured by FAA.
- Score: 31.611582207768464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) allows agents to jointly train a global model without
sharing their local data. However, due to the heterogeneous nature of local
data, it is challenging to optimize or even define fairness of the trained
global model for the agents. For instance, existing work usually treats
accuracy parity across agents as fairness in FL. This notion is limited,
especially in the heterogeneous setting: it is intuitively "unfair" to force
agents with high-quality data to achieve accuracy similar to agents who
contribute low-quality data, which may discourage them from participating in
FL. In this work, we propose a formal FL fairness definition, fairness via
agent-awareness (FAA), which takes different contributions of heterogeneous
agents into account. Under FAA, the performance of agents with high-quality
data will not be sacrificed merely because many agents with low-quality data
participate. In addition, we propose a fair FL training
algorithm based on agent clustering (FOCUS) to achieve fairness in FL measured
by FAA. Theoretically, we prove the convergence and optimality of FOCUS under
mild conditions for linear and general convex loss functions with bounded
smoothness. We also prove that FOCUS always achieves higher fairness in terms
of FAA compared with standard FedAvg under both linear and general convex loss
functions. Empirically, we show that on four FL datasets, including synthetic
data, images, and texts, FOCUS achieves significantly higher fairness in terms
of FAA while maintaining competitive prediction accuracy compared with FedAvg
and state-of-the-art fair FL algorithms.
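The abstract describes FAA and FOCUS only at a high level: FAA accounts for what each agent could achieve with its own data, and FOCUS trains per-cluster models over agents with similar data. The sketch below is a minimal, hypothetical illustration of one plausible reading of those two ideas; the excess-risk formalization, the function names, and the size-weighted per-cluster averaging are assumptions for illustration, not the paper's actual definitions or interface.

```python
# Hypothetical sketch (not the paper's code): an FAA-style unfairness
# measure via per-agent excess risk, plus one aggregation round of
# clustered federated averaging in the spirit of FOCUS.
import numpy as np

def faa_gap(shared_losses, local_opt_losses):
    """FAA-style unfairness: the widest gap in excess risk across agents.
    An agent's excess risk is the loss of the model it is served minus the
    loss of its own locally optimal model; 0 means every agent is equally
    well served relative to what it could achieve alone (assumed reading)."""
    risks = [s - l for s, l in zip(shared_losses, local_opt_losses)]
    return max(risks) - min(risks)

def clustered_fedavg_round(agent_weights, agent_sizes, assignments, k):
    """One round of per-cluster FedAvg: agents assigned to the same cluster
    are averaged together, weighted by local data size, so similar agents
    share a model instead of all agents sharing one global model."""
    models = []
    for c in range(k):
        members = [i for i, a in enumerate(assignments) if a == c]
        if not members:  # empty cluster: no model this round
            models.append(None)
            continue
        sizes = np.array([agent_sizes[i] for i in members], dtype=float)
        stacked = np.stack([agent_weights[i] for i in members])
        models.append(np.average(stacked, axis=0, weights=sizes))
    return models

# Toy example: agent 2 has noisy data, so even its own best model has high
# loss; FAA does not demand that the shared model equalize raw losses.
shared = [0.90, 0.85, 0.95]      # each agent's loss under one global model
local_best = [0.20, 0.25, 0.80]  # each agent's loss under its own best model
print(faa_gap(shared, local_best))  # 0.55: the clean-data agents sacrifice most

weights = [np.zeros(4), np.ones(4), 10 * np.ones(4)]
print(clustered_fedavg_round(weights, [100, 50, 50], [0, 0, 1], k=2))
```

Under this toy reading, forcing a single global model to equalize raw losses would penalize the clean-data agents; clustering lets each group be served by a model close to what its own data supports.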
Related papers
- WassFFed: Wasserstein Fair Federated Learning [31.135784690264888]
Federated Learning (FL) trains models collaboratively in scenarios where users' data cannot be shared across clients.
We propose a Wasserstein Fair Federated Learning framework, namely WassFFed.
arXiv Detail & Related papers (2024-11-11T11:26:22Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition [7.918307236588161]
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL).
We identify three sources of unfairness in FL, namely, Unique Disparity, Redundant Disparity, and Masked Disparity.
We derive fundamental limits on the trade-off between global and local fairness, highlighting where they agree or disagree.
arXiv Detail & Related papers (2023-07-21T03:41:55Z)
- Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased with respect to sensitive attributes such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
arXiv Detail & Related papers (2023-07-10T08:45:58Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew hinders current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
- Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features [24.679535905451758]
Federated learning (FL) provides a privacy-preserving solution for distributed machine learning tasks.
One challenging problem that severely damages the performance of FL models is the co-occurrence of data heterogeneity and long-tail distribution.
We propose a novel privacy-preserving FL method for heterogeneous and long-tailed data via Classifier Re-training with Federated Features (CReFF).
arXiv Detail & Related papers (2022-04-28T10:35:11Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedPrune: Towards Inclusive Federated Learning [1.308951527147782]
Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner.
We propose FedPrune, a system that tackles this challenge by pruning the global model for slow clients based on their device characteristics.
Using insights from the Central Limit Theorem, FedPrune incorporates a new aggregation technique that achieves robust performance over non-IID data.
arXiv Detail & Related papers (2021-10-27T06:33:38Z)