EFFL: Egalitarian Fairness in Federated Learning for Mitigating Matthew
Effect
- URL: http://arxiv.org/abs/2309.16338v1
- Date: Thu, 28 Sep 2023 10:51:12 GMT
- Title: EFFL: Egalitarian Fairness in Federated Learning for Mitigating Matthew
Effect
- Authors: Jiashi Gao, Changwu Huang, Ming Tang, Shin Hwei Tan, Xin Yao, Xuetao
Wei
- Abstract summary: We propose Egalitarian Fairness Federated Learning (EFFL) to mitigate the Matthew effect.
EFFL aims for performance optimality, minimizing the empirical risk loss and the bias for each client.
We show EFFL outperforms other state-of-the-art FL algorithms in achieving a high-performance global model.
- Score: 11.24699174877316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in federated learning (FL) enable collaborative training of
machine learning (ML) models from large-scale and widely dispersed clients
while protecting their privacy. However, when different clients' datasets are
heterogeneous, traditional FL mechanisms produce a global model that does not
adequately represent the poorer clients with limited data resources, resulting
in lower accuracy and higher bias on their local data. According to the Matthew
effect, which describes how the advantaged gain more advantage and the
disadvantaged lose more over time, deploying such a global model in client
applications may worsen the resource disparity among the clients and harm the
principles of social welfare and fairness. To mitigate the Matthew effect, we
propose Egalitarian Fairness Federated Learning (EFFL), where egalitarian
fairness requires that the global model learned from FL has: (1) equal accuracy
among clients; (2) equal decision bias among clients. Besides achieving
egalitarian fairness among the clients, EFFL also aims for performance
optimality, minimizing the empirical risk loss and the bias for each client;
both are essential for any ML model training, whether centralized or
decentralized. We formulate EFFL as a multi-constrained
multi-objective optimization (MCMOO) problem, with the decision bias and
egalitarian fairness as constraints and the minimization of the empirical risk
losses on all clients as multiple objectives to be optimized. We propose a
gradient-based three-stage algorithm to obtain the Pareto optimal solutions
within the constraint space. Extensive experiments demonstrate that EFFL
outperforms other state-of-the-art FL algorithms in achieving a
high-performance global model with enhanced egalitarian fairness among all
clients.
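To make the abstract's formulation concrete, the following is a minimal sketch of the multi-constrained multi-objective optimization (MCMOO) problem described above; the abstract gives no notation, so the symbols (per-client risk \ell_k, per-client decision bias b_k, tolerances \epsilon) are illustrative assumptions rather than the paper's own:

    \min_{\theta}\ \big(\ell_1(\theta),\ \ldots,\ \ell_N(\theta)\big)
    \text{s.t.}\quad b_k(\theta) \le \epsilon_{b} \quad \forall k,
    \qquad |\ell_i(\theta)-\ell_j(\theta)| \le \epsilon_{\mathrm{acc}},\ \ |b_i(\theta)-b_j(\theta)| \le \epsilon_{\mathrm{bias}} \quad \forall i,j

The per-client empirical risks are the multiple objectives, the first constraint caps each client's decision bias, and the pairwise constraints encode egalitarian fairness (near-equal accuracy and near-equal decision bias across clients).

The abstract does not detail the gradient-based three-stage algorithm, so the snippet below is not that algorithm; it is a generic penalty-scalarization sketch of the same constrained problem that any gradient-based optimizer could minimize, with all names and thresholds hypothetical:

    from typing import List
    import torch

    def effl_penalized_objective(
        client_losses: List[torch.Tensor],  # per-client empirical risk l_k(theta)
        client_biases: List[torch.Tensor],  # per-client decision bias b_k(theta)
        eps_bias: float = 0.05,       # max tolerated decision bias per client
        eps_acc_gap: float = 0.02,    # max tolerated loss gap between clients
        eps_bias_gap: float = 0.02,   # max tolerated bias gap between clients
        rho: float = 10.0,            # penalty strength for constraint violations
    ) -> torch.Tensor:
        losses = torch.stack(client_losses)
        biases = torch.stack(client_biases)

        # Crude scalarization of the multiple objectives: average client risk.
        objective = losses.mean()

        # Constraint (a): low decision bias on every client.
        bias_violation = torch.clamp(biases - eps_bias, min=0).sum()

        # Constraint (b): egalitarian fairness -- per-client losses and biases
        # should be nearly equal across clients.
        fairness_violation = (
            torch.clamp(losses.max() - losses.min() - eps_acc_gap, min=0)
            + torch.clamp(biases.max() - biases.min() - eps_bias_gap, min=0)
        )

        return objective + rho * (bias_violation + fairness_violation)

A faithful multi-objective treatment would keep the per-client risks as a vector and search for Pareto-optimal points inside the constraint set, as the paper's three-stage procedure reportedly does, rather than collapsing them into a single mean as this sketch does.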
Related papers
- Federated Loss Exploration for Improved Convergence on Non-IID Data [20.979550470097823]
Federated Loss Exploration (FedLEx) is an innovative approach specifically designed to tackle the challenges of non-IID data. FedLEx distinctively addresses the shortcomings of existing FL methods in non-IID settings. Our experiments with state-of-the-art FL algorithms demonstrate significant improvements in performance.
arXiv Detail & Related papers (2025-06-23T13:42:07Z) - FedMHO: Heterogeneous One-Shot Federated Learning Towards Resource-Constrained Edge Devices [12.08958206272527]
Federated Learning (FL) is increasingly adopted in edge computing scenarios, where a large number of heterogeneous clients operate with either constrained or sufficient computational resources.
One-shot FL has emerged as a promising approach to mitigate communication overhead, and model-heterogeneous FL solves the problem of diverse computing resources across clients.
We propose a novel FL framework named FedMHO, which leverages deep classification models on resource-sufficient clients and lightweight generative models on resource-constrained devices.
arXiv Detail & Related papers (2025-02-12T15:54:56Z) - Feasible Learning [78.6167929413604]
We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample.
Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance.
arXiv Detail & Related papers (2025-01-24T20:39:38Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation [26.52731463877256]
In this paper, we introduce the concept of adversarial multi-armed bandit to optimize the proposed objective with explicit constraints on performance disparities.
Practically, we propose a novel multi-armed bandit-based allocation FL algorithm (FedMABA) to mitigate performance unfairness among diverse clients with different data distributions.
arXiv Detail & Related papers (2024-10-26T10:41:45Z) - On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness [16.595935469099306]
We propose FLAME, an optimization framework that utilizes the alternating direction method of multipliers (ADMM) to train personalized and global models.
Our theoretical analysis establishes the global convergence and two kinds of convergence rates for FLAME under mild assumptions.
Our experimental findings show that FLAME outperforms state-of-the-art methods in convergence and accuracy, and it achieves higher test accuracy under various attacks.
arXiv Detail & Related papers (2024-07-23T11:35:42Z) - FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Entropy-driven Fair and Effective Federated Learning [26.22014904183881]
Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy. We propose a novel approach that leverages entropy-based aggregation combined with model and gradient alignments to simultaneously optimize fairness and global model performance.
arXiv Detail & Related papers (2023-01-29T10:02:42Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability of learning collectively from distributed data sources.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z) - Auto-weighted Robust Federated Learning with Corrupted Data Sources [7.475348174281237]
Federated learning provides a communication-efficient and privacy-preserving training process.
Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruptions.
We propose Auto-weighted Robust Federated Learning (arfl) to provide robustness against corrupted data sources.
arXiv Detail & Related papers (2021-01-14T21:54:55Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)