Evaluating the Communication Efficiency in Federated Learning Algorithms
- URL: http://arxiv.org/abs/2004.02738v1
- Date: Mon, 6 Apr 2020 15:31:54 GMT
- Title: Evaluating the Communication Efficiency in Federated Learning Algorithms
- Authors: Muhammad Asad, Ahmed Moustafa, Takayuki Ito and Muhammad Aslam
- Abstract summary: Recently, in light of new privacy legislation in many countries, the concept of Federated Learning (FL) has been introduced.
In FL, mobile users are empowered to learn a global model by aggregating their local models, without sharing the privacy-sensitive data.
This raises the challenge of communication cost when implementing FL at large scale.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the era of advanced technologies, mobile devices are equipped with
computing and sensing capabilities that gather vast amounts of data, which are
well suited for training learning models. Coupled with advances in Deep
Learning (DL), these models power numerous useful applications, e.g., image
processing, speech recognition, healthcare, vehicular networks and many more.
Traditionally, Machine Learning (ML) approaches require data to be centralised
in cloud-based data-centres. However, this data is often large in quantity and
privacy-sensitive, which prevents uploading it to these data-centres for
training the learning models. In turn, this results in critical issues of high
latency and communication inefficiency. Recently, in light of new privacy
legislation in many countries, the concept of Federated Learning (FL) has been
introduced. In FL, mobile users collaboratively learn a global model by
aggregating their locally trained models, without sharing their
privacy-sensitive data. Usually, these mobile users have slow network
connections to the data-centre where the global model is maintained. Moreover,
complex and large-scale networks involve heterogeneous devices with varying
energy constraints. This raises the challenge of communication cost when
implementing FL at large scale. To this end, in this research, we begin with
the fundamentals of FL; we then survey recent FL algorithms and evaluate their
communication efficiency with detailed comparisons. Furthermore, we propose a
set of solutions to alleviate existing FL problems from both the communication
and privacy perspectives.
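The aggregation step the abstract describes — clients train locally and send only model parameters, which the server combines into a global model — can be sketched in the FedAvg style common to most FL algorithms surveyed here. This is an illustrative sketch, not code from the paper; the function name and the plain-list weight representation are assumptions for the example.

```python
# Illustrative FedAvg-style aggregation sketch (not from the paper).
# Each client k trains locally and sends only its weight vector w_k and
# sample count n_k; the server averages weights proportionally to n_k,
# never seeing the raw, privacy-sensitive data.

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model weights.

    client_weights: list of weight vectors (one list of floats per client)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Two clients: one with 100 local samples, one with 300.
w_global = fedavg_aggregate([[1.0, 2.0], [5.0, 6.0]], [100, 300])
print(w_global)  # [4.0, 5.0]
```

Note that only the weight vectors cross the network; the communication cost per round scales with the model size and number of participants, which is precisely the bottleneck the surveyed algorithms try to reduce.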
Related papers
- Federated Learning for 6G: Paradigms, Taxonomy, Recent Advances and Insights (arXiv, 2023-12-07)
  This paper examines the added value of implementing Federated Learning throughout all levels of the protocol stack.
  It presents important FL applications, addresses hot topics, and provides valuable insights and explicit guidance for future research and development.
  Its concluding remarks aim to leverage the synergy between FL and future 6G, while highlighting FL's potential to revolutionize the wireless industry.
- Federated Learning: A Cutting-Edge Survey of the Latest Advancements and Applications (arXiv, 2023-10-08)
  Federated learning (FL) is a technique for developing robust machine learning (ML) models.
  To protect user privacy, FL requires users to send model updates rather than transmitting large quantities of raw and potentially confidential data.
  This survey provides a comprehensive analysis and comparison of the most recent FL algorithms.
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly (arXiv, 2023-10-04)
  This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
  It provides a micro-level hardware benchmark, compares the model FLOP utilization to a state-of-the-art data-centre GPU, and studies the network utilization under realistic conditions.
- Evaluation and comparison of federated learning algorithms for Human Activity Recognition on smartphones (arXiv, 2022-10-30)
  Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices.
  The paper proposes a new FL algorithm, termed FedDist, which can modify models during training by identifying dissimilarities between neurons among the clients.
  Results show the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
- Federated Learning and Meta Learning: Approaches, Applications, and Directions (arXiv, 2022-10-24)
  This tutorial presents a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
  Unlike other tutorial papers, its objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
- Edge-Native Intelligence for 6G Communications Driven by Federated Learning: A Survey of Trends and Challenges (arXiv, 2021-11-14)
  A new technique, coined federated learning (FL), arose to bring machine learning to the edge of wireless networks.
  FL exploits both the decentralised datasets and the computing resources of participating clients to develop a generalised ML model without compromising data privacy.
  The survey provides an overview of the state of the art of FL applications in key wireless technologies.
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks (arXiv, 2021-08-20)
  The authors develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
  Their analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
  To circumvent these issues, they propose a mobility-aware cluster federated learning (MACFL) algorithm.
- Federated Learning: A Signal Processing Perspective (arXiv, 2021-03-31)
  Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
  This article provides a unified systematic framework for federated learning that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
- Wireless Communications for Collaborative Federated Learning (arXiv, 2020-06-03)
  Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
  Google's seminal FL algorithm requires all devices to be directly connected to a central controller.
  This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
- Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art (arXiv, 2020-02-25)
  The authors introduce some recently implemented real-life applications of Federated Learning.
  In large-scale networks, there may be clients with varying computational resource capabilities.
  They highlight future directions in the FL area concerning resource-constrained devices.
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.