Decentral and Incentivized Federated Learning Frameworks: A Systematic
Literature Review
- URL: http://arxiv.org/abs/2205.07855v2
- Date: Wed, 18 May 2022 11:52:21 GMT
- Title: Decentral and Incentivized Federated Learning Frameworks: A Systematic
Literature Review
- Authors: Leon Witt, Mathis Heyer, Kentaroh Toyoda, Wojciech Samek and Dan Li
- Abstract summary: This is the first systematic literature review analyzing holistic FLFs in the domain of both decentralized and incentivized federated learning.
Despite their massive potential to direct the future of a more distributed and secure AI, none of the analyzed FLFs is production-ready.
- Score: 13.544807934973168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of Federated Learning (FL) has ignited a new paradigm for parallel
and confidential decentralized Machine Learning (ML) with the potential of
utilizing the computational power of a vast number of IoT, mobile and edge
devices without data leaving the respective device, ensuring privacy by design.
Yet, in order to scale this new paradigm beyond small groups of already
entrusted entities towards mass adoption, the Federated Learning Framework
(FLF) has to (i) become truly decentralized and (ii) incentivize its
participants. This is the first systematic literature review analyzing holistic
FLFs in the domain of both decentralized and incentivized federated learning.
422 publications were retrieved by querying 12 major scientific databases;
after a systematic review and filtering process, 40 articles remained for
in-depth examination. Despite their massive potential to direct the future of a
more distributed and secure AI, none of the analyzed FLFs is production-ready.
The approaches vary heavily in terms of use cases, system design, addressed
issues and thoroughness. We are the first to provide a systematic approach to
classifying and quantifying differences between FLFs, exposing limitations of
current works and deriving future directions for research in this novel
domain.
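The abstract above describes the core FL mechanic (local training on data that never leaves the device, followed by aggregation of model updates) only at a high level. The following is a minimal, illustrative sketch of that loop under simplifying assumptions: a linear least-squares model, synthetic client data, and FedAvg-style weighted averaging. The function names (local_step, federated_round) are hypothetical and not taken from any framework reviewed in the paper.

```python
# Minimal sketch of a federated training loop: clients train locally on
# private data and share only model weights, which a coordinator averages
# weighted by local dataset size. Illustrative only, not a reviewed FLF.
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round: each client trains locally; only weights leave the device."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_step(global_w, X, y))
        sizes.append(len(y))
    # FedAvg-style aggregation: weighted average of the client models.
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 50, 30):   # three clients with different amounts of local data
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
print("recovered weights:", w)   # approaches true_w without pooling raw data
```

In the decentralized and incentivized settings surveyed by the paper, the central averaging step would be replaced by a peer-to-peer or blockchain-based aggregation protocol and coupled with a reward mechanism, neither of which this sketch models.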
Related papers
- One-shot Federated Learning Methods: A Practical Guide [23.737787001337082]
One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round.
This paper presents a systematic analysis of the challenges faced by OFL and thoroughly reviews the current methods.
arXiv Detail & Related papers (2025-02-13T09:26:44Z)
- SoK: Challenges and Opportunities in Federated Unlearning [32.0365189539138]
This SoK paper aims to take a deep look at the federated unlearning literature, with the goal of identifying research trends and challenges in this emerging field.
arXiv Detail & Related papers (2024-03-04T19:35:08Z)
- Exploring Machine Learning Models for Federated Learning: A Review of Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework enhanced to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges [0.0]
Federated Learning (FL) has gained relevance in training collaborative models without sharing sensitive data.
Decentralized Federated Learning (DFL) emerged to address the concerns associated with centralized FL by promoting decentralized model aggregation.
This article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators.
arXiv Detail & Related papers (2022-11-15T18:51:20Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- VeriFi: Towards Verifiable Federated Unlearning [59.169431326438676]
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request that its private data be deleted from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
arXiv Detail & Related papers (2022-05-25T12:20:02Z)
- SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision [34.38856587032084]
Federated Learning (FL) is transforming the ML training ecosystem from a centralized over-the-cloud setting to distributed training over edge devices.
We propose self-supervised federated learning (SSFL), a unified self-supervised and personalized federated learning framework.
We show that the gap in evaluation accuracy between supervised and unsupervised learning in FL is both small and reasonable.
arXiv Detail & Related papers (2021-10-06T02:58:45Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation; a minimal sketch of the masking idea follows this list.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
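As referenced in the RoFL entry above, secure aggregation hides individual client updates from the server while leaving their sum intact. The following is a minimal sketch of the additive pairwise-masking idea under simplifying assumptions: real protocols operate over a finite field and handle key agreement and client dropouts, all of which are omitted here, and mask_updates is a hypothetical helper, not RoFL's API.

```python
# Minimal sketch of additive-masking secure aggregation: each client pair
# agrees on a random mask that one adds and the other subtracts, so individual
# updates are hidden but the aggregate the server computes is unchanged.
import numpy as np

def mask_updates(updates, rng):
    """Return masked updates whose element-wise sum equals the sum of the originals."""
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            pairwise = rng.normal(size=updates[i].shape)  # shared secret mask
            masked[i] += pairwise   # client i adds the mask
            masked[j] -= pairwise   # client j subtracts it, so it cancels in the sum
    return masked

rng = np.random.default_rng(42)
updates = [rng.normal(size=4) for _ in range(3)]     # three clients' model updates
masked = mask_updates(updates, rng)

# The server only sees masked vectors, yet the aggregate is preserved.
print(np.allclose(sum(masked), sum(updates)))        # True
```

Because every pairwise mask is added by one client and subtracted by another, the masks cancel exactly in the aggregate, which is why the final check prints True.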
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.