Towards Fairness-Aware Federated Learning
- URL: http://arxiv.org/abs/2111.01872v3
- Date: Mon, 3 Apr 2023 06:22:10 GMT
- Title: Towards Fairness-Aware Federated Learning
- Authors: Yuxin Shi, Han Yu, Cyril Leung
- Abstract summary: We propose a taxonomy of Fairness-Aware Federated Learning (FAFL) approaches covering major steps in Federated Learning.
We discuss the main metrics for experimentally evaluating the performance of FAFL approaches.
- Score: 19.73772410934193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Federated Learning (FL) have brought large-scale
collaborative machine learning opportunities for massively distributed clients
with performance and data privacy guarantees. However, most current works focus
on the interest of the central controller in FL, and overlook the interests of
the FL clients. This may result in unfair treatment of clients that discourages
them from actively participating in the learning process and damages the
sustainability of the FL ecosystem. Therefore, the topic of ensuring fairness
in FL is attracting a great deal of research interest. In recent years, diverse
Fairness-Aware FL (FAFL) approaches have been proposed in an effort to achieve
fairness in FL from different perspectives. However, there is no comprehensive
survey that helps readers gain insight into this interdisciplinary field. This
paper aims to provide such a survey. By examining the fundamental and
simplifying assumptions, as well as the notions of fairness adopted by existing
literature in this field, we propose a taxonomy of FAFL approaches covering
major steps in FL, including client selection, optimization, contribution
evaluation and incentive distribution. In addition, we discuss the main metrics
for experimentally evaluating the performance of FAFL approaches, and suggest
promising future research directions towards FAFL.
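To make the taxonomy above concrete, here is a minimal, self-contained Python/NumPy sketch (not taken from the survey) of a single FL round with hooks at the four stages it covers: client selection, optimization, contribution evaluation and incentive distribution. The particular heuristics (under-participation-weighted sampling, loss-weighted aggregation, a loss-based contribution proxy, and proportional payouts) are illustrative assumptions, not methods prescribed by the paper.
```python
# Minimal sketch (not from the paper) of where the four FAFL stages sit in a
# generic FL round: client selection, optimization (aggregation), contribution
# evaluation, and incentive distribution. All heuristics are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def select_clients(num_clients, num_selected, participation_counts):
    # Fairness-aware client selection: bias sampling toward clients that have
    # participated less often so far (one simple heuristic among many).
    weights = 1.0 / (1.0 + participation_counts)
    probs = weights / weights.sum()
    return rng.choice(num_clients, size=num_selected, replace=False, p=probs)

def aggregate(global_model, client_updates, client_losses):
    # Fairness-aware optimization: re-weight updates so that clients with
    # higher local loss get more influence (a q-FFL-style intuition).
    losses = np.asarray(client_losses)
    weights = losses / losses.sum()
    update = sum(w * u for w, u in zip(weights, client_updates))
    return global_model + update

def evaluate_contributions(client_losses):
    # Contribution evaluation: a simple loss-based proxy; real approaches
    # often use Shapley-value or influence-based estimates instead.
    losses = np.asarray(client_losses)
    return (losses.max() - losses) + 1e-8

def distribute_incentives(budget, contributions):
    # Incentive distribution: split a fixed budget in proportion to the
    # estimated contributions.
    contributions = np.asarray(contributions)
    return budget * contributions / contributions.sum()

# One illustrative round on a toy 5-dimensional "model".
num_clients, dim = 10, 5
global_model = np.zeros(dim)
participation = np.zeros(num_clients)

selected = select_clients(num_clients, num_selected=4, participation_counts=participation)
participation[selected] += 1
updates = [rng.normal(scale=0.1, size=dim) for _ in selected]
losses = rng.uniform(0.2, 1.0, size=len(selected))

global_model = aggregate(global_model, updates, losses)
payments = distribute_incentives(budget=100.0, contributions=evaluate_contributions(losses))
print("selected clients:", selected)
print("payments:", np.round(payments, 2))
```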
Related papers
- Federated Fairness Analytics: Quantifying Fairness in Federated Learning [2.9674793945631097]
Federated Learning (FL) is a privacy-enhancing technology for distributed ML.
FL inherits fairness challenges from classical ML and introduces new ones.
We propose Federated Fairness Analytics - a methodology for measuring fairness.
arXiv Detail & Related papers (2024-08-15T15:23:32Z)
- Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- FedCompetitors: Harmonious Collaboration in Federated Learning with Competing Participants [41.070716405671206]
Federated learning (FL) provides a privacy-preserving approach for collaborative training of machine learning models.
It is crucial to select appropriate collaborators for each FL participant (FL-PT) based on data complementarity.
It is also imperative to consider the inter-individual relationships among FL-PTs, since some FL-PTs engage in competition.
arXiv Detail & Related papers (2023-12-18T17:53:01Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning [57.35402286842029]
We show that centralized learning always generalizes better than decentralized federated learning (DFL).
We also conduct experiments on several common setups in FL to validate that our theoretical analysis is consistent with experimental phenomena and contextually valid in several general and practical scenarios.
arXiv Detail & Related papers (2023-10-05T11:09:42Z)
- Bayesian Federated Learning: A Survey [54.40136267717288]
Federated learning (FL) demonstrates its advantages in integrating distributed infrastructure, communication, computing and learning in a privacy-preserving manner.
The robustness and capabilities of existing FL methods are challenged by limited and dynamic data and conditions.
BFL has emerged as a promising approach to address these issues.
arXiv Detail & Related papers (2023-04-26T03:41:17Z)
- Towards Interpretable Federated Learning [19.764172768506132]
Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
It is important to balance the need for performance, privacy preservation and interpretability, especially in mission-critical applications such as finance and healthcare.
We conduct comprehensive analysis of the representative IFL approaches, the commonly adopted performance evaluation metrics, and promising directions towards building versatile IFL techniques.
arXiv Detail & Related papers (2023-02-27T02:06:18Z)
- FederatedScope: A Comprehensive and Flexible Federated Learning Platform via Message Passing [63.87056362712879]
We propose a novel and comprehensive federated learning platform, named FederatedScope, which is based on a message-oriented framework.
Compared with a procedural framework, the proposed message-oriented framework is more flexible in expressing heterogeneous message exchanges.
We conduct a series of experiments on the provided easy-to-use and comprehensive FL benchmarks to validate the correctness and efficiency of FederatedScope.
arXiv Detail & Related papers (2022-04-11T11:24:21Z)
- Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning is an emerging research topic that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z)
- Proportional Fairness in Federated Learning [27.086313029073683]
PropFair is a novel and easy-to-implement algorithm for finding proportionally fair solutions in federated learning.
We demonstrate that PropFair can approximately find PF solutions, and it achieves a good balance between the average performances of all clients and of the worst 10% clients.
arXiv Detail & Related papers (2022-02-03T16:28:04Z)
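On the proportional-fairness notion referenced by PropFair above: under standard assumptions, a proportionally fair allocation of client utilities is one that maximizes the sum of their logarithms. The short NumPy sketch below uses made-up utility numbers (it is not PropFair's objective or code) to show how this criterion can prefer a slightly lower average utility in exchange for better outcomes for the worst-off clients.
```python
# Illustrative sketch (not PropFair's formulation) of the classical
# proportional-fairness (PF) criterion applied to per-client utilities in FL.
import numpy as np

def pf_objective(client_utilities):
    # Sum of log utilities; higher means "more proportionally fair".
    u = np.asarray(client_utilities, dtype=float)
    assert np.all(u > 0), "utilities must be positive for the log to be defined"
    return np.log(u).sum()

# Two hypothetical outcomes for 5 clients (made-up numbers): the first has a
# slightly higher mean utility, the second treats the worst-off client better.
mean_optimal  = np.array([0.95, 0.95, 0.95, 0.95, 0.20])  # mean 0.80
more_balanced = np.array([0.85, 0.85, 0.85, 0.80, 0.55])  # mean 0.78

for name, u in [("mean-optimal", mean_optimal), ("more balanced", more_balanced)]:
    print(f"{name}: mean={u.mean():.3f}, PF objective={pf_objective(u):.3f}")
# The PF objective ranks "more balanced" higher despite its lower mean,
# mirroring the average-vs-worst-clients trade-off described above.
```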