Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
- URL: http://arxiv.org/abs/2411.05874v2
- Date: Thu, 10 Apr 2025 10:21:56 GMT
- Title: Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
- Authors: Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, Adam Hilbert,
- Abstract summary: The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) could allow training models from distributed data and explaining their inner workings while preserving essential aspects of privacy. This scoping review maps the publications that jointly deal with FL and XAI, focusing on publications that reported an interplay between FL and model interpretability or post-hoc explanations.
- Score: 1.6252563723817934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) could allow training models from distributed data and explaining their inner workings while preserving essential aspects of privacy. Toward establishing the benefits and tensions associated with their interplay, this scoping review maps the publications that jointly deal with FL and XAI, focusing on publications that reported an interplay between FL and model interpretability or post-hoc explanations. Out of the 37 studies meeting our criteria, only one explicitly and quantitatively analyzed the influence of FL on model explanations, revealing a significant research gap. The aggregation of interpretability metrics across FL nodes created generalized global insights at the expense of diluting node-specific patterns. Several studies proposed FL algorithms incorporating explanation methods to safeguard the learning process against defaulting or malicious nodes. Studies using established FL libraries or following reporting guidelines are a minority. More quantitative research and structured, transparent practices are needed to fully understand the mutual impact of FL and XAI and the conditions under which it arises.
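As a concrete illustration of the aggregation pattern described above, the sketch below (a hypothetical Python example, not taken from any of the reviewed studies) averages per-node feature-importance scores, e.g., mean absolute SHAP values from a post-hoc explainer, into a single global explanation; opposing node-specific patterns cancel out in the average, which is the dilution effect noted in the abstract.

```python
# Hypothetical sketch: server-side averaging of per-node interpretability
# metrics (e.g., mean |SHAP| per feature). Function and variable names are
# illustrative, not taken from the reviewed publications.
import numpy as np

def aggregate_importances(node_importances, node_weights=None):
    """Weighted average of per-node feature-importance vectors.

    node_importances: array-like of shape (num_nodes, num_features)
    node_weights: optional per-node weights (e.g., local sample counts)
    """
    scores = np.asarray(node_importances, dtype=float)
    if node_weights is None:
        node_weights = np.ones(scores.shape[0])
    w = np.asarray(node_weights, dtype=float)
    w = w / w.sum()
    return w @ scores  # global importance per feature

# Two nodes whose local data emphasize opposite features:
node_a = [0.9, 0.1]  # feature 0 dominates on node A
node_b = [0.1, 0.9]  # feature 1 dominates on node B
print(aggregate_importances([node_a, node_b]))  # -> [0.5 0.5]: both local patterns diluted
```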
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z) - SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNIDEAL achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z) - Towards Interpretable Federated Learning [19.764172768506132]
Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
It is important to balance the need for performance, privacy preservation, and interpretability, especially in mission-critical applications such as finance and healthcare.
We conduct a comprehensive analysis of representative interpretable FL (IFL) approaches, the commonly adopted performance evaluation metrics, and promising directions towards building versatile IFL techniques.
arXiv Detail & Related papers (2023-02-27T02:06:18Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z) - Knowledge Distillation for Federated Learning: a Practical Guide [12.232339022986931]
Federated Learning (FL) enables the training of Deep Learning models without centrally collecting possibly sensitive raw data.
The most widely used FL algorithms are parameter-averaging schemes (e.g., Federated Averaging; a minimal averaging sketch follows the related-papers list below) that have well-known limits.
Regular Knowledge Distillation (KD) can solve or mitigate the weaknesses of parameter-averaging FL algorithms.
arXiv Detail & Related papers (2022-11-09T08:31:23Z) - Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning has become an emerging topic of research that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Perturbation Theory for the Information Bottleneck [6.117084972237769]
The information bottleneck (IB) method formalizes extracting relevant information from data.
The nonlinearity of the IB problem makes it computationally expensive and analytically intractable in general.
We derive a perturbation theory for the IB method and report the first complete characterization of the learning onset.
arXiv Detail & Related papers (2021-05-28T16:59:01Z) - Failure Prediction in Production Line Based on Federated Learning: An Empirical Study [8.574488051650123]
Federated learning (FL) enables multiple participants to build a learning model without sharing data.
This paper presents the results of an empirical study on failure prediction in the production line based on FL.
arXiv Detail & Related papers (2021-01-25T10:27:19Z) - FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
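As referenced in the Knowledge Distillation entry above, most FL algorithms rest on parameter averaging. The following minimal sketch (hypothetical Python, illustrating Federated Averaging in its simplest form, without client sampling, multiple local epochs, or secure aggregation) shows one server-side aggregation round as a data-size-weighted average of client parameters.

```python
# Minimal FedAvg-style aggregation sketch. Names and structure are
# illustrative assumptions, not tied to any specific FL library.
import numpy as np

def fedavg_round(client_updates):
    """client_updates: list of (weights: dict[str, np.ndarray], num_samples: int)."""
    total_samples = sum(n for _, n in client_updates)
    layer_names = client_updates[0][0].keys()
    return {
        name: sum(w[name] * (n / total_samples) for w, n in client_updates)
        for name in layer_names
    }

# Example round with two clients holding different amounts of data:
clients = [
    ({"layer1": np.array([1.0, 2.0])}, 100),
    ({"layer1": np.array([3.0, 4.0])}, 300),
]
print(fedavg_round(clients))  # {'layer1': array([2.5, 3.5])}
```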