A Systematic Literature Review on Federated Machine Learning: From A
Software Engineering Perspective
- URL: http://arxiv.org/abs/2007.11354v9
- Date: Fri, 28 May 2021 04:54:59 GMT
- Title: A Systematic Literature Review on Federated Machine Learning: From A
Software Engineering Perspective
- Authors: Sin Kit Lo, Qinghua Lu, Chen Wang, Hye-Young Paik, Liming Zhu
- Abstract summary: Federated learning is an emerging machine learning paradigm where clients train models locally and formulate a global model based on the local model updates.
We perform a systematic literature review from a software engineering perspective, based on 231 primary studies.
Our data synthesis covers the lifecycle of federated learning system development that includes background understanding, requirement analysis, architecture design, implementation, and evaluation.
- Score: 9.315446698757768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an emerging machine learning paradigm where clients
train models locally and formulate a global model based on the local model
updates. To identify the state-of-the-art in federated learning and explore how
to develop federated learning systems, we perform a systematic literature
review from a software engineering perspective, based on 231 primary studies.
Our data synthesis covers the lifecycle of federated learning system
development that includes background understanding, requirement analysis,
architecture design, implementation, and evaluation. We highlight and summarise
the findings from the results, and identify future trends to encourage
researchers to advance their current work.
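For context, the following is a minimal, self-contained sketch of the paradigm the abstract describes: clients train locally and a server forms a global model from the local updates. The linear model, function names, and the FedAvg-style weighted averaging are illustrative assumptions, not the reviewed paper's own implementation.

```python
# Illustrative federated learning loop (assumption: FedAvg-style aggregation).
# Clients train on their own data; only model parameters are shared.
import numpy as np

rng = np.random.default_rng(0)

def local_train(global_w, X, y, lr=0.1, epochs=5):
    """Run a few steps of local gradient descent on one client's data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient for a linear model
        w -= lr * grad
    return w

def aggregate(local_ws, sizes):
    """Server step: weighted average of client models by local data size."""
    weights = np.array(sizes) / sum(sizes)
    return sum(w_i * w for w_i, w in zip(weights, local_ws))

# Synthetic data split across three clients (no raw data leaves a client).
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for rnd in range(20):                     # communication rounds
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    global_w = aggregate(local_ws, [len(y) for _, y in clients])

print("global model after federation:", global_w)
```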
Related papers
- A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation.
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs).
We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z) - Principles and Components of Federated Learning Architectures [0.8869563348631716]
Federated learning (FL) is a machine learning framework in which a large number of clients collaborate to train a model.
This decentralized approach to model training offers advantages in privacy, security, regulatory compliance, and cost.
arXiv Detail & Related papers (2025-02-07T19:09:03Z) - Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various domains of ML with consistent notation, which the current literature lacks.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - Exploring Machine Learning Models for Federated Learning: A Review of
Approaches, Performance, and Limitations [1.1060425537315088]
Federated learning is a distributed learning framework enhanced to preserve the privacy of individuals' data.
In times of crisis, when real-time decision-making is critical, federated learning allows multiple entities to work collectively without sharing sensitive data.
This paper is a systematic review of the literature on privacy-preserving machine learning in the last few years.
arXiv Detail & Related papers (2023-11-17T19:23:21Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process (a schematic of the momentum form appears after this list).
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - FLRA: A Reference Architecture for Federated Learning Systems [8.180947044673639]
Federated learning is an emerging machine learning paradigm that enables multiple devices to train models locally and formulate a global model, without sharing the clients' local data.
We propose FLRA, a reference architecture for federated learning systems, which provides a template design for federated learning-based solutions.
arXiv Detail & Related papers (2021-06-22T06:59:19Z) - Architectural Patterns for the Design of Federated Learning Systems [12.330671239159102]
Federated learning has received fast-growing interest from academia and industry as a way to tackle the challenges of data hunger and privacy in machine learning.
This paper presents a collection of architectural patterns to deal with the design challenges of federated learning systems.
arXiv Detail & Related papers (2021-01-07T05:11:09Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Self-organizing Democratized Learning: Towards Large-scale Distributed
Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance for the agents' learning models than conventional FL algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z) - Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015.
Its goal is to systemize design features of neural entity linking systems and compare their performance to the remarkable classic methods on common benchmarks.
The survey touches on applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models.
arXiv Detail & Related papers (2020-05-31T18:02:26Z)
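For readers unfamiliar with the momentum term referenced in the federated SGD entry above, classical heavy-ball momentum updates take the form

$$w^{t+1} = w^{t} - \eta\, \nabla F(w^{t}) + \beta\,(w^{t} - w^{t-1}),$$

where $\eta$ is the learning rate and $\beta$ the momentum coefficient. That paper's claim is that server-side aggregation of local SGD updates implicitly produces a term of the same $\beta\,(w^{t} - w^{t-1})$ shape; the precise coefficient and conditions depend on the paper's setting, and this schematic only illustrates the standard momentum form, not the paper's derivation.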