Towards Interpretable Federated Learning
- URL: http://arxiv.org/abs/2302.13473v1
- Date: Mon, 27 Feb 2023 02:06:18 GMT
- Title: Towards Interpretable Federated Learning
- Authors: Anran Li, Rui Liu, Ming Hu, Luu Anh Tuan, Han Yu
- Abstract summary: Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
It is important to balance the need for performance, privacy preservation and interpretability, especially in mission-critical applications such as finance and healthcare.
We conduct comprehensive analysis of the representative IFL approaches, the commonly adopted performance evaluation metrics, and promising directions towards building versatile IFL techniques.
- Score: 19.764172768506132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables multiple data owners to build machine
learning models collaboratively without exposing their private local data. In
order for FL to achieve widespread adoption, it is important to balance the
need for performance, privacy preservation and interpretability, especially in
mission-critical applications such as finance and healthcare. Thus,
interpretable federated learning (IFL) has become an emerging topic of research
attracting significant interest from academia and industry alike. Its
interdisciplinary nature can be challenging for new researchers to pick up. In
this paper, we bridge this gap by providing (to the best of our knowledge) the
first survey on IFL. We propose a unique IFL taxonomy which covers relevant
works enabling FL models to explain the prediction results, support model
debugging, and provide insights into the contributions made by individual data
owners or data samples, which in turn, is crucial for allocating rewards fairly
to motivate active and reliable participation in FL. We conduct comprehensive
analysis of the representative IFL approaches, the commonly adopted performance
evaluation metrics, and promising directions towards building versatile IFL
techniques.
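The core FL workflow the abstract describes (clients train on private local data and share only model updates for aggregation) can be sketched as a minimal FedAvg-style round. The logistic-regression model, synthetic client data, and size-weighted averaging below are illustrative assumptions, not details from the survey.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (logistic-regression SGD on private data).
    Only the resulting weights, never X or y, are sent to the server."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)       # gradient step
    return w

def fedavg_round(global_w, clients):
    """Server-side aggregation: average client models weighted by data size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Four clients with synthetic private datasets (3 features, binary labels).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20).astype(float))
           for _ in range(4)]
w = np.zeros(3)
for _ in range(10):                                # ten federated rounds
    w = fedavg_round(w, clients)
```

Interpretability methods surveyed in the paper would operate on top of such a loop, e.g. explaining the aggregated model's predictions or attributing them to individual clients.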
Related papers
- Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z)
- SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
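The paper's bargaining model is not detailed in this summary, but the idea of performance gain-based pricing can be sketched with a hypothetical rule: the feature's price derives from the buyer's revenue from the task-performance gain, and a trade only happens when that gain covers the seller's cost. All names and numbers below are illustrative assumptions.

```python
def marginal_gain_price(acc_with, acc_without, revenue_per_point, seller_cost):
    """Hypothetical performance-gain-based price for a traded feature set.
    The surplus is split evenly, a stand-in for the paper's bargaining step."""
    gain = acc_with - acc_without                   # task-performance gain
    buyer_value = gain * 100 * revenue_per_point    # revenue per accuracy point
    if buyer_value <= seller_cost:
        return None                                 # no economically efficient trade
    return (buyer_value + seller_cost) / 2          # split the surplus evenly

# A 6-point accuracy gain worth 50 per point, against a seller cost of 120.
price = marginal_gain_price(acc_with=0.91, acc_without=0.85,
                            revenue_per_point=50.0, seller_cost=120.0)
```

When the gain is too small to cover the seller's cost, the function returns `None`, reflecting that only economically efficient transactions should occur.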
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- Personalized Federated Learning for Statistical Heterogeneity [0.021756081703276]
The popularity of federated learning (FL) is on the rise, along with growing concerns about data privacy in artificial intelligence applications.
This paper offers a brief summary of the current research progress in the field of personalized federated learning (PFL)
arXiv Detail & Related papers (2024-02-07T12:28:52Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning [89.21177894013225]
For a federated learning model to perform well, it is crucial to have a diverse and representative dataset.
We show that the statistical criterion used to quantify the diversity of the data, as well as the choice of the federated learning algorithm used, has a significant effect on the resulting equilibrium.
We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population.
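The summary does not state which statistical criterion the paper uses, but one plausible sketch quantifies how representative a collector's contribution is via the KL divergence between its label distribution and the global population's; a mechanism could then reward collectors in inverse proportion to this score. The criterion, labels, and distributions below are hypothetical.

```python
from collections import Counter
import math

def representativeness(contributed_labels, global_dist):
    """Hypothetical diversity criterion: KL divergence between a collector's
    contributed label distribution and the global population's distribution.
    Smaller values mean the contribution is more representative."""
    counts = Counter(contributed_labels)
    n = len(contributed_labels)
    kl = 0.0
    for label, q in global_dist.items():
        p = counts.get(label, 0) / n
        if p > 0:                       # 0 * log(0) term contributes nothing
            kl += p * math.log(p / q)
    return kl

global_dist = {"cat": 0.5, "dog": 0.5}
balanced = representativeness(["cat", "dog"] * 10, global_dist)   # matches global
skewed = representativeness(["cat"] * 19 + ["dog"], global_dist)  # over-represents cats
```

A contribution matching the global distribution scores 0, while a skewed one scores strictly higher, which is the ordering an incentive mechanism would need.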
arXiv Detail & Related papers (2023-06-08T23:38:25Z)
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning has become an emerging topic of research that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z)
- Incentive Mechanisms for Federated Learning: From Economic and Game Theoretic Perspective [42.50367925564069]
Federated learning (FL) has shown great potential in training large-scale machine learning (ML) models without exposing the owners' raw data.
In FL, the data owners can train ML models based on their local data and only send the model updates rather than raw data to the model owner for aggregation.
To improve learning performance in terms of model accuracy and training completion time, it is essential to recruit sufficient participants.
arXiv Detail & Related papers (2021-11-20T07:22:14Z)
- FedNLP: A Research Platform for Federated Learning in Natural Language Processing [55.01246123092445]
We present the FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
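The Shapley value underlying that last entry can be sketched exactly for a handful of clients: each client's value is its average marginal contribution to a coalition utility over all orderings. The paper's federated variant evaluates this round by round on submitted model updates; here `utility` is a stand-in for any coalition-to-accuracy evaluation, and the client weights are purely illustrative.

```python
from itertools import permutations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values by enumerating all n! orderings: each client's
    value is its average marginal contribution to the utility of the
    coalition of clients that precede it."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for order in permutations(clients):
        coalition = []
        for c in order:
            before = utility(frozenset(coalition))
            coalition.append(c)
            values[c] += utility(frozenset(coalition)) - before
        # each ordering contributes one marginal term per client
    for c in values:
        values[c] /= factorial(n)
    return values

# Toy additive utility: accuracy grows with each client's data, with client
# "a" contributing twice as much as the others (illustrative numbers only).
weights = {"a": 0.2, "b": 0.1, "c": 0.1}
utility = lambda coalition: sum(weights[c] for c in coalition)
vals = shapley_values(["a", "b", "c"], utility)
```

For an additive utility like this one, each client's Shapley value equals its individual weight, and the values sum to the grand coalition's utility, the "efficiency" desideratum the abstract alludes to.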
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.