Fair and Consistent Federated Learning
- URL: http://arxiv.org/abs/2108.08435v1
- Date: Thu, 19 Aug 2021 01:56:08 GMT
- Title: Fair and Consistent Federated Learning
- Authors: Sen Cui, Weishen Pan, Jian Liang, Changshui Zhang, Fei Wang
- Abstract summary: Federated learning (FL) has gained growing interest for its capability of learning from distributed data sources collectively.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
- Score: 48.19977689926562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has gained growing interest for its
capability of learning from distributed data sources collectively without
accessing the raw data samples across different sources. So far, FL research
has mostly focused on improving performance; how algorithmic disparity is
affected in a model learned through FL, and how that disparity interacts with
utility inconsistency, remain largely unexplored. In this paper,
we propose an FL framework to jointly consider performance consistency and
algorithmic fairness across different local clients (data sources). We derive
our framework from a constrained multi-objective optimization perspective, in
which we learn a model satisfying fairness constraints on all clients with
consistent performance. Specifically, we treat the algorithm prediction loss at
each local client as an objective and maximize the worst-performing client with
fairness constraints through optimizing a surrogate maximum function with all
objectives involved. A gradient-based procedure is employed to achieve the
Pareto optimality of this optimization problem. Theoretical analysis is
provided to prove that our method can converge to a Pareto solution that
achieves the min-max performance with fairness constraints on all clients.
Comprehensive experiments on synthetic and real-world datasets demonstrate the
superiority of our approach over baselines and its effectiveness in achieving
both fairness and consistency across all local clients.
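To make the optimization above concrete, here is a minimal sketch, assuming a log-sum-exp surrogate for the maximum over client losses and a penalty term for a demographic-parity-style fairness constraint. The toy data, the penalty formulation, and all names (`client_loss_grad`, `fairness_gap_grad`, the hyperparameters `mu` and `lam`) are illustrative assumptions, not the paper's actual algorithm or code.

```python
import numpy as np

# Toy setup: K clients, each holding a linear-regression shard with a
# binary sensitive attribute. Purely illustrative, not the paper's data.
rng = np.random.default_rng(0)
K, d, n = 5, 10, 100
clients = []
for k in range(K):
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d) + 0.5 * k      # heterogeneous shards
    y = X @ w_true + 0.1 * rng.normal(size=n)
    s = rng.integers(0, 2, size=n)             # sensitive attribute
    clients.append((X, y, s))

def client_loss_grad(w, X, y):
    """Mean squared error on one client and its gradient."""
    r = X @ w - y
    return (r @ r) / len(y), (2.0 / len(y)) * (X.T @ r)

def fairness_gap_grad(w, X, s):
    """Squared gap between the two groups' mean predictions
    (a demographic-parity-style proxy) and its gradient."""
    diff = X[s == 0].mean(axis=0) - X[s == 1].mean(axis=0)
    gap = diff @ w
    return gap ** 2, 2.0 * gap * diff

# Smooth surrogate for max_k loss_k: (1/mu) * log(sum_k exp(mu * loss_k)).
# Its gradient is a softmax-weighted sum of client gradients, so descent
# concentrates on the worst-performing clients (the min-max behavior).
mu, lam, lr = 20.0, 5.0, 0.05
w = np.zeros(d)
for step in range(500):
    losses, grads, fair_grads = [], [], []
    for X, y, s in clients:
        l, g = client_loss_grad(w, X, y)
        _, fg = fairness_gap_grad(w, X, s)
        losses.append(l); grads.append(g); fair_grads.append(fg)
    losses = np.array(losses)
    soft = np.exp(mu * (losses - losses.max()))
    soft /= soft.sum()                         # softmax over client losses
    grad = sum(a * g for a, g in zip(soft, grads))
    grad += lam * sum(fair_grads) / K          # fairness penalty gradient
    w -= lr * grad

print("per-client losses:", np.round(losses, 3))
```

The softmax weights put most mass on the clients with the largest losses, so each update prioritizes the worst-performing client, mimicking the min-max objective, while the penalty pushes the fairness gap toward zero. The paper's actual procedure additionally carries Pareto-optimality guarantees that a plain penalty method does not provide.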
Related papers
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client-side adaptive algorithm designed to tackle this challenge; a generic sketch of client-side adaptive updates follows this entry.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
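The FedCAda entry above concerns client-side adaptive optimization. The sketch below shows only the generic idea, FedAvg with Adam-style local steps; it is not FedCAda's actual algorithm, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_adam_steps(w, grad_fn, steps=10, lr=0.01,
                     b1=0.9, b2=0.999, eps=1e-8):
    """Run Adam locally on one client, starting from the global model w.
    A generic sketch; FedCAda adds client-side corrections on top of
    this idea that are not reproduced here."""
    m, v, w = np.zeros_like(w), np.zeros_like(w), w.copy()
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)              # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

def fedavg_round(w_global, client_grad_fns):
    """One communication round: clients adapt locally, server averages."""
    return np.mean([local_adam_steps(w_global, gf)
                    for gf in client_grad_fns], axis=0)

# Usage with toy quadratic objectives, one optimum per client.
rng = np.random.default_rng(1)
targets = [rng.normal(size=3) for _ in range(4)]
grad_fns = [lambda w, c=c: 2 * (w - c) for c in targets]
w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, grad_fns)
print(w)  # settles in the region between the client optima
```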
- Balancing Similarity and Complementarity for Federated Learning [91.65503655796603]
Federated Learning (FL) is increasingly important in mobile and IoT systems.
One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data.
We introduce a novel framework, FedSaC, which balances similarity and complementarity in FL cooperation.
arXiv Detail & Related papers (2024-05-16T08:16:19Z)
- Anti-Matthew FL: Bridging the Performance Gap in Federated Learning to Counteract the Matthew Effect [4.716839088197377]
Federated learning (FL) facilitates model training across heterogeneous and diverse datasets.
In this work, we propose anti-Matthew fairness for the global model at the client level.
We show that our proposed anti-Matthew FL outperforms other state-of-the-art FL algorithms in achieving a high-performance global model.
arXiv Detail & Related papers (2023-09-28T10:51:12Z)
- UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNIDEAL achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z)
- Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a set of local clients is coordinated by a global server.
Clients are prone to overfitting to their own optima, which can deviate sharply from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima toward the global objective; a minimal sketch of the underlying sharpness-aware step follows this entry.
Theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z)
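FedSMOO builds on sharpness-aware minimization (SAM). Below is a minimal sketch of a single SAM step as a client might apply it locally; FedSMOO's dynamic regularizer for global consistency is not reproduced, and the function names are illustrative.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.01, rho=0.05):
    """One sharpness-aware minimization step.
    Perturb the weights toward the (approximately) worst point in an L2
    ball of radius rho, then descend using the gradient evaluated there.
    FedSMOO would additionally apply its dynamic regularizer to keep the
    local solution aligned with the global objective (omitted here)."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    return w - lr * grad_fn(w + eps)             # descend from sharp point

# Usage on a toy quadratic with minimum at c.
c = np.array([1.0, -2.0])
grad_fn = lambda w: 2 * (w - c)
w = np.zeros(2)
for _ in range(300):
    w = sam_step(w, grad_fn)
print(w)  # approaches [1, -2]
```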
- Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z)
- A Fair Federated Learning Framework With Reinforcement Learning [23.675056844328]
Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients; a toy policy-gradient sketch follows this entry.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
arXiv Detail & Related papers (2022-05-26T15:10:16Z)
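PG-FFL learns a policy that assigns aggregation weights to clients. The sketch below uses a REINFORCE-style update with per-client losses as the state and a fairness-motivated reward (negative variance of client losses); the state, reward, and policy parameterization are illustrative assumptions, not PG-FFL's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy federated problem: a scalar model pulled toward K client targets.
K = 4
targets = np.linspace(-1.0, 1.0, K)
theta = np.zeros((K, K))        # linear policy: logits = theta @ losses
sigma, lr_pi, lr_w = 0.3, 0.05, 0.1

w = 0.0
for rnd in range(200):
    losses = np.array([(w - t) ** 2 for t in targets])
    mean_logits = theta @ losses
    logits = mean_logits + sigma * rng.normal(size=K)   # exploration
    weights = softmax(logits)                           # aggregation weights
    updates = np.array([-2 * (w - t) for t in targets]) # client directions
    w += lr_w * weights @ updates                       # weighted aggregation
    new_losses = np.array([(w - t) ** 2 for t in targets])
    reward = -new_losses.var()          # encourage uniform client utility
    # REINFORCE: score function of a Gaussian policy over the logits.
    theta += lr_pi * reward * np.outer((logits - mean_logits) / sigma ** 2,
                                       losses)

print("learned weights:", np.round(softmax(theta @ new_losses), 3))
```

The reward here stands in for whatever fairness metric the server computes per round; any scalar score of a round's outcome can be plugged into the same update.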
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously; a minimal sketch of local mixup follows this entry.
We provide a comprehensive theoretical analysis, covering robustness, convergence, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
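DRFLM injects inter-client noise via local mixup. The helper below is a minimal sketch of mixup applied within a single client's batch (the standard mixup recipe, applied locally); the interface is an illustrative assumption, not DRFLM's implementation.

```python
import numpy as np

def local_mixup(X, y, alpha=0.2, rng=None):
    """Mixup within one client's local batch:
        x_mix = lam * x_i + (1 - lam) * x_j
        y_mix = lam * y_i + (1 - lam) * y_j
    with lam ~ Beta(alpha, alpha). Applied locally, this smooths each
    client's training data without any cross-client data sharing."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(X))
    return lam * X + (1 - lam) * X[perm], lam * y + (1 - lam) * y[perm]

# Usage: each client mixes its own shard before a local training step.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.integers(0, 2, size=8).astype(float)   # labels become soft targets
X_mix, y_mix = local_mixup(X, y, rng=rng)
print(y_mix)
```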
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)