A Systematic Literature Review on Federated Learning: From A Model
Quality Perspective
- URL: http://arxiv.org/abs/2012.01973v1
- Date: Tue, 1 Dec 2020 05:48:36 GMT
- Title: A Systematic Literature Review on Federated Learning: From A Model
Quality Perspective
- Authors: Yi Liu, Li Zhang, Ning Ge, Guanghao Li
- Abstract summary: Federated Learning (FL) can jointly train a global model while the data remains local.
This paper systematically reviews and objectively analyzes the approaches to improving the quality of FL models.
- Score: 10.725466627592732
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: As an emerging technique, Federated Learning (FL) can jointly
train a global model while the data remains local, addressing data privacy
protection, typically in combination with encryption mechanisms. Clients
train their local models, and the server aggregates them until convergence.
In this process, the server uses an incentive mechanism to encourage clients
to contribute high-quality, large-volume data that improves the global model.
Although some works have applied FL to the Internet of Things (IoT),
medicine, manufacturing, and other domains, the application of FL is still in
its infancy, and many related issues remain to be solved. Improving the
quality of FL models is currently a research hotspot and a challenging task.
This paper systematically reviews and objectively analyzes the approaches to
improving the quality of FL models. We also examine the research and
application trends of FL and compare the learning quality of FL and non-FL
approaches, because practitioners often worry that achieving privacy
protection requires compromising learning quality. We use a systematic review
method to analyze 147 recent articles related to FL. This review provides
useful information and insights to both academia and industry practitioners.
We investigate research questions about the academic research and industrial
application trends of FL and the essential factors affecting the quality of
FL models, and we compare FL and non-FL algorithms in terms of learning
quality. Based on the review's conclusions, we give suggestions for improving
FL model quality. Finally, we propose an FL application framework for
practitioners.
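The loop summarized in the abstract (clients train local models on private data; the server aggregates them until convergence) can be illustrated with a minimal FedAvg-style sketch. This is not the review's own algorithm or any specific library's API; the linear local learner, the simulated clients, and all names below are illustrative assumptions.

```python
# Minimal FedAvg-style sketch of the loop described in the abstract:
# each client trains locally on its own data, and the server averages
# the resulting models (weighted by local data size) until convergence.
# All names and the toy model here are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent epochs on a
    linear least-squares model (a stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, dim, rounds=20):
    """Server loop: broadcast the global model, collect local updates,
    and aggregate them weighted by each client's data volume."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:          # raw data never leaves the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.asarray(sizes, float))
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three simulated clients holding different amounts of local data.
    clients = []
    for n in (30, 60, 90):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.05 * rng.normal(size=n)
        clients.append((X, y))
    print(federated_averaging(clients, dim=2))  # approx. [2.0, -1.0]
```

Weighting the average by local data size reflects the abstract's point that clients contributing larger, higher-quality datasets influence the global model more; production systems would add secure aggregation or encryption on top of this loop.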
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study approaches Federated Learning (FL) using Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Keep It Simple: Fault Tolerance Evaluation of Federated Learning with Unreliable Clients [0.28939699256527274]
Federated learning (FL) enables decentralized model training across multiple devices without exposing their local training data.
We show that simple FL algorithms can perform surprisingly well in the presence of unreliable clients.
arXiv Detail & Related papers (2023-05-16T23:55:47Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which gives FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- Introducing Federated Learning into Internet of Things ecosystems -- preliminary considerations [0.31402652384742363]
Federated learning (FL) was proposed to facilitate the training of models in a distributed environment.
It supports the protection of (local) data privacy and uses local resources for model training.
arXiv Detail & Related papers (2022-07-15T18:48:57Z)
- Federated Learning: Applications, Challenges and Future Scopes [1.3190581566723918]
Federated learning (FL) is a system in which a central aggregator coordinates the efforts of multiple clients to solve machine learning problems.
FL has applications in wireless communication, service recommendation, intelligent medical diagnosis systems, and healthcare.
arXiv Detail & Related papers (2022-05-18T10:47:09Z)
- A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning [17.55163940659976]
Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model.
We design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency.
Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost.
arXiv Detail & Related papers (2022-01-09T05:55:17Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
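The delay-minimization entry above reports that a bisection search attains the optimal solution. The paper's wireless system model is not reproduced here; the sketch below only illustrates the generic bisection pattern over a monotone feasibility test, and the feasibility predicate, bounds, and toy example are assumptions chosen for illustration.

```python
# Generic bisection-search sketch, illustrating the kind of procedure the
# delay-minimization entry above refers to. The feasibility test below is a
# toy placeholder, NOT the wireless-FL model from the cited paper.
def bisect_min(feasible, lo, hi, tol=1e-6):
    """Return (approximately) the smallest t in [lo, hi] with feasible(t) True,
    assuming monotone feasibility: once feasible, always feasible."""
    assert feasible(hi), "upper bound must be feasible"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # a smaller value still works; tighten upper bound
        else:
            lo = mid          # infeasible; the optimum lies above mid
    return hi

if __name__ == "__main__":
    # Toy example: the smallest "delay budget" t that fits a fixed workload
    # of 3.7 units, i.e. the answer is 3.7.
    print(round(bisect_min(lambda t: t >= 3.7, lo=0.0, hi=10.0), 4))
```

With a monotone feasibility test, bisection converges to within `tol` of the optimum in O(log((hi - lo) / tol)) iterations.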
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.