Collaborative Fairness in Federated Learning
- URL: http://arxiv.org/abs/2008.12161v2
- Date: Fri, 28 Aug 2020 01:06:55 GMT
- Title: Collaborative Fairness in Federated Learning
- Authors: Lingjuan Lyu, Xinyi Xu, and Qian Wang
- Abstract summary: We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL makes participants converge to different models, achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
- Score: 24.7378023761443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In current deep learning paradigms, local training or the Standalone
framework tends to result in overfitting and thus poor generalizability. This
problem can be addressed by Distributed or Federated Learning (FL) that
leverages a parameter server to aggregate model updates from individual
participants. However, most existing Distributed or FL frameworks have
overlooked an important aspect of participation: collaborative fairness. In
particular, all participants can receive the same or similar models, regardless
of their contributions. To address this issue, we investigate collaborative
fairness in FL and propose a novel Collaborative Fair Federated Learning
(CFFL) framework which uses reputation to ensure that participants converge
to different models, achieving fairness without compromising the
predictive performance. Extensive experiments on benchmark datasets demonstrate
that CFFL achieves high fairness, delivers comparable accuracy to the
Distributed framework, and outperforms the Standalone framework.
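To make the reputation idea concrete, here is a minimal sketch, not the authors' implementation, of how a server could keep a per-participant reputation and hand back correspondingly sparsified aggregated updates, so that higher-reputation participants converge to better models. The EMA update, the validation-based contribution signal, and the magnitude-based sparsification are illustrative assumptions.

```python
import numpy as np

def update_reputation(reputation, contribution, alpha=0.5):
    """EMA of each participant's contribution signal (assumed: e.g. the
    validation accuracy of the updates they uploaded this round)."""
    return alpha * reputation + (1 - alpha) * contribution

def allocate_updates(agg_grad, reputation):
    """Give each participant a fraction of the aggregated update proportional
    to normalized reputation: the top contributor receives it in full,
    others get a magnitude-sparsified version (illustrative rule)."""
    frac = reputation / reputation.max()
    allocations = []
    for f in frac:
        masked = agg_grad.copy()
        k = int((1 - f) * agg_grad.size)          # entries to withhold
        if k > 0:
            smallest = np.argsort(np.abs(agg_grad))[:k]
            masked[smallest] = 0.0
        allocations.append(masked)
    return allocations

# one toy round: 3 participants, a 10-dimensional "aggregated gradient"
rep = np.array([0.9, 0.5, 0.2])
agg = np.random.randn(10)
updates = allocate_updates(agg, rep)
rep = update_reputation(rep, contribution=np.array([0.92, 0.48, 0.25]))
```

The design point is that rewards differ by construction: the server never withholds participation, it only scales how much of the aggregate each participant receives.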
Related papers
- Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
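The ShapFed entry above centers on Shapley values for contribution assessment. Below is a minimal sketch of exact Shapley computation, assuming a coalition utility function u(S) such as the validation accuracy of a model aggregated from subset S; ShapFed's fine-grained, class-wise refinements are not reproduced here.

```python
from itertools import combinations
from math import factorial

def shapley_values(participants, utility):
    """Exact Shapley value per participant: the weighted average marginal
    contribution of joining every coalition S (exponential in n; fine for small n)."""
    n = len(participants)
    values = {}
    for p in participants:
        others = [q for q in participants if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (utility(set(S) | {p}) - utility(set(S)))
        values[p] = total
    return values

# toy utility (assumed): accuracy grows with the coalition's pooled data quality
quality = {"A": 0.6, "B": 0.3, "C": 0.1}
u = lambda S: min(1.0, sum(quality[p] for p in S))
print(shapley_values(list(quality), u))
```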
- FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning [46.30755524556465]
We present FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness.
We develop a submodel allocation module with a theoretical guarantee of fairness.
Experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy.
arXiv Detail & Related papers (2024-05-28T15:43:29Z)
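FedSAC's fairness lever is the size of the submodel each client receives. The sketch below is a loose illustration rather than FedSAC's actual allocation rule: it maps contribution scores to per-layer neuron masks so that higher contributors get wider submodels; the min-max normalization, the width floor, and the random masking are all assumptions.

```python
import numpy as np

def submodel_mask(layer_width, frac, rng):
    """Keep a `frac` fraction of a layer's neurons (1 = kept, 0 = withheld)."""
    keep = max(1, int(frac * layer_width))
    mask = np.zeros(layer_width)
    mask[rng.choice(layer_width, size=keep, replace=False)] = 1.0
    return mask

def allocate_submodels(contributions, layer_width, floor=0.3, seed=0):
    """Map contribution scores to submodel widths in [floor, 1]
    (the normalization and the floor are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    c = np.asarray(contributions, dtype=float)
    frac = floor + (1 - floor) * (c - c.min()) / (c.max() - c.min() + 1e-12)
    return [submodel_mask(layer_width, f, rng) for f in frac]

# three clients; the highest contributor gets the widest submodel
masks = allocate_submodels([0.9, 0.5, 0.1], layer_width=8)
```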
- Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two important dimensions of fairness in FL.
We propose mFairFL, a method that achieves group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z)
- Dynamic Fair Federated Learning Based on Reinforcement Learning [19.033986978896074]
Federated learning enables collaborative training and optimization of global models among a group of devices without sharing local data samples.
We propose DQFFL, a dynamic q-fairness federated learning algorithm based on reinforcement learning.
DQFFL outperforms state-of-the-art methods in overall performance, fairness, and convergence speed.
arXiv Detail & Related papers (2023-11-02T03:05:40Z)
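DQFFL builds on q-fairness, in which client losses are raised to a power q so that worse-off clients receive more aggregation weight. The sketch below shows plain q-fair aggregation in the spirit of q-FFL; the reinforcement-learning component DQFFL uses to adapt the fairness pressure dynamically is omitted, and the helper names are invented for illustration.

```python
import numpy as np

def qfair_weights(losses, q):
    """q-FFL-style aggregation weights, w_k proportional to L_k ** q:
    q = 0 recovers plain averaging; larger q emphasizes high-loss clients."""
    losses = np.asarray(losses, dtype=float)
    w = losses ** q
    return w / w.sum()

def aggregate(client_updates, losses, q=1.0):
    """Weighted average of client updates under q-fair weights (q is fixed
    here; DQFFL instead tunes it with reinforcement learning)."""
    w = qfair_weights(losses, q)
    return sum(wk * uk for wk, uk in zip(w, client_updates))

updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(aggregate(updates, losses=[0.2, 0.5, 1.0], q=2.0))
```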
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning which learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose FedGSP, an FL approach that is robust to data heterogeneity.
We show that FedGSP improves accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
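FedGSP's title names the mechanism: clients are arranged into groups, the model is handed sequentially from client to client inside a group, and groups run in parallel before their results are merged. A minimal sketch of that control flow follows, with a placeholder local update; the actual grouping strategy and aggregation in FedGSP are more elaborate.

```python
import numpy as np

def local_step(model, client_data, lr=0.1):
    """Placeholder local update: one step toward the client's data mean."""
    return model + lr * (client_data.mean(axis=0) - model)

def train_round(model, groups):
    """Sequential hand-off inside each group; groups run independently
    (they could execute in parallel) and are averaged afterwards."""
    group_models = []
    for group in groups:
        m = model.copy()
        for client_data in group:      # client-to-client sequential training
            m = local_step(m, client_data)
        group_models.append(m)
    return np.mean(group_models, axis=0)

rng = np.random.default_rng(0)
clients = [rng.normal(i, 1.0, size=(20, 3)) for i in range(6)]
groups = [clients[:3], clients[3:]]    # illustrative grouping, not FedGSP's
model = np.zeros(3)
for _ in range(5):
    model = train_round(model, groups)
```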
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
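Because FedH2L exchanges only posteriors on a shared seed set, the core computation each participant needs is a distillation loss against peers' predictions. Here is a minimal sketch, assuming softmax posteriors and a KL matching term; the temperature and the averaging over peers are illustrative choices, not FedH2L's exact objective.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(p || q), averaged over the seed-set examples."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))

def mutual_distillation_loss(own_logits, peer_posteriors, temperature=2.0):
    """Match this participant's posterior on the shared seed set to each
    peer's exchanged posterior; no parameters or gradients are shared."""
    own = softmax(own_logits, temperature)
    return sum(kl(peer, own) for peer in peer_posteriors) / len(peer_posteriors)

seed_logits = np.random.randn(32, 10)                          # own logits on the seed set
peers = [softmax(np.random.randn(32, 10)) for _ in range(3)]   # posteriors received from peers
print(mutual_distillation_loss(seed_logits, peers))
```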
- A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning [24.442595192268872]
Federated learning (FL) is an emerging practical framework for effective and scalable machine learning.
In conventional FL, all participants receive the global model (equal rewards), which might be unfair to the high-contributing participants.
We propose a novel RFFL framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism.
arXiv Detail & Related papers (2020-11-20T15:52:45Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
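The residual construction in the entry above is easy to sketch: predictions are the sum of a server-side shared model and a personalized local model trained on whatever the shared part misses. Below is a minimal linear-model illustration; the joint prediction rule comes from the entry, while the squared loss and gradient steps are assumptions.

```python
import numpy as np

def client_round(X, y, w_shared, lr=0.01, steps=200):
    """Fit a personalized residual model on what the shared model misses;
    the linear model and squared loss are illustrative assumptions."""
    w_local = np.zeros(X.shape[1])
    for _ in range(steps):
        resid = y - X @ (w_shared + w_local)   # joint prediction: shared + local
        w_local += lr * X.T @ resid / len(y)
    return w_local

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.ones(5) + 2.0 * X[:, 0]             # client deviates from the global trend
w_shared = np.zeros(5)                         # server-side shared model (untrained here)
w_local = client_round(X, y, w_shared)
pred = X @ (w_shared + w_local)                # prediction made jointly with the shared model
```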
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.