Distributed Learning in Wireless Networks: Recent Progress and Future
Challenges
- URL: http://arxiv.org/abs/2104.02151v1
- Date: Mon, 5 Apr 2021 20:57:56 GMT
- Title: Distributed Learning in Wireless Networks: Recent Progress and Future
Challenges
- Authors: Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, Mehdi
Bennis, Aneta Vulgarakis Feljan, and H. Vincent Poor
- Abstract summary: Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
- Score: 170.35951727508225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The next generation of wireless networks will enable many machine learning
(ML) tools and applications to efficiently analyze various types of data
collected by edge devices for inference, autonomy, and decision making
purposes. However, due to resource constraints, delay limitations, and privacy
challenges, edge devices cannot offload their entire collected datasets to a
cloud server for centrally training their ML models or inference purposes. To
overcome these challenges, distributed learning and inference techniques have
been proposed as a means to enable edge devices to collaboratively train ML
models without raw data exchanges, thus reducing the communication overhead and
latency as well as improving data privacy. However, deploying distributed
learning over wireless networks faces several challenges including the
uncertain wireless environment, limited wireless resources (e.g., transmit
power and radio spectrum), and hardware resources. This paper provides a
comprehensive study of how distributed learning can be efficiently and
effectively deployed over wireless edge networks. We present a detailed
overview of several emerging distributed learning paradigms, including
federated learning, federated distillation, distributed inference, and
multi-agent reinforcement learning. For each learning framework, we first
introduce the motivation for deploying it over wireless networks. Then, we
present a detailed literature review on the use of communication techniques for
its efficient deployment. We then provide an illustrative example that shows how to
optimize the wireless network so as to improve the performance of each framework. Finally, we
outline future research opportunities. In a nutshell, this paper provides a holistic
set of guidelines on how to deploy a broad range of distributed learning
frameworks over real-world wireless communication networks.
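The surveyed paradigms share a common pattern: edge devices compute updates on their locally collected data and exchange only model parameters (or distilled model outputs) with an aggregator, never the raw data. As a rough illustration of this pattern, the sketch below runs a few rounds of federated averaging (FedAvg) on a toy linear-regression task; the client datasets, learning rate, local epochs, and full device participation are illustrative assumptions, not the specific algorithms or wireless settings studied in the paper.

```python
import numpy as np

# Minimal, self-contained sketch of federated averaging (FedAvg).
# Each "edge device" holds private (X, y) data for a linear model y ~= X @ w
# and only shares its locally updated weights -- never the raw data.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])        # illustrative ground-truth model

def make_client_data(n_samples):
    """Generate a toy local dataset (stand-in for data collected at an edge device)."""
    X = rng.normal(size=(n_samples, true_w.size))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    return X, y

clients = [make_client_data(n) for n in (40, 60, 100)]   # heterogeneous dataset sizes

def local_update(w_global, X, y, lr=0.05, epochs=5):
    """Run a few gradient-descent steps on local data only."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)          # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server aggregates client models, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_update(w_global, X, y) for X, y in clients]
    weights = sizes / sizes.sum()
    return sum(wk * m for wk, m in zip(weights, local_models))

w = np.zeros_like(true_w)
for _ in range(20):
    w = fedavg_round(w, clients)
print("estimated weights:", w)   # approaches true_w without any raw data exchange
```

In the wireless setting considered here, the aggregation step is where limited spectrum, transmit power, and channel noise enter; those wireless factors are what the paper's illustrative examples optimize for each framework.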
Related papers
- Towards Scalable Wireless Federated Learning: Challenges and Solutions [40.68297639420033]
Federated learning (FL) emerges as an effective distributed machine learning framework.
We discuss the challenges and solutions of achieving scalable wireless FL from the perspectives of both network design and resource orchestration.
arXiv Detail & Related papers (2023-10-08T08:55:03Z)
- Revolutionizing Wireless Networks with Federated Learning: A Comprehensive Review [0.0]
The article discusses the significance of machine learning in wireless communication.
It highlights Federated Learning (FL) as a novel approach that could play a vital role in future mobile networks, particularly 6G and beyond.
arXiv Detail & Related papers (2023-08-01T22:32:10Z)
- Machine Learning for QoS Prediction in Vehicular Communication: Challenges and Solution Approaches [46.52224306624461]
We consider maximum throughput prediction to enhance, for example, streaming or high-definition mapping applications.
We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data.
We use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed.
arXiv Detail & Related papers (2023-02-23T12:29:20Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- In-Network Learning: Distributed Training and Inference in Networks [10.635097939284753]
We develop a learning algorithm and an architecture that make use of multiple data streams and processing units.
In particular, the analysis reveals how inference propagates and fuses across a network.
arXiv Detail & Related papers (2021-07-07T18:35:08Z)
- Transfer Learning for Future Wireless Networks: A Comprehensive Survey [49.746711269488515]
This article aims to provide a comprehensive survey on applications of Transfer Learning in wireless networks.
We first provide an overview of TL including formal definitions, classification, and various types of TL techniques.
We then discuss diverse TL approaches proposed to address emerging issues in wireless networks.
arXiv Detail & Related papers (2021-02-15T14:19:55Z)
- Wireless for Machine Learning [91.13476340719087]
We give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support machine learning services over distributed datasets.
There are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML (a minimal over-the-air aggregation sketch is given after this list).
This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
arXiv Detail & Related papers (2020-08-31T11:09:49Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the remaining open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
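Referring back to the "Wireless for Machine Learning" entry above, the following sketch illustrates the analog over-the-air computation theme: when devices pre-scale their updates and transmit them simultaneously, the multiple-access channel superposes the signals, so the server directly receives a noisy sum (and hence the average) of the local updates. The number of devices, channel gains, and noise level below are illustrative assumptions, not values from any of the surveyed papers.

```python
import numpy as np

# Minimal sketch of analog over-the-air aggregation: K devices transmit their
# local update vectors at the same time over a shared channel; the signals add
# up "in the air", so the server receives the sum of updates plus noise.

rng = np.random.default_rng(1)
K, d = 4, 8                                  # devices, model dimension (illustrative)
updates = rng.normal(size=(K, d))            # local model updates (illustrative)
h = rng.uniform(0.5, 1.5, size=K)            # per-device channel gains, assumed known
noise_std = 0.05

# Each device inverts its own channel so that the superposed
# signal received at the server equals the plain sum of the updates.
tx = (updates.T / h).T                       # pre-scale each update by 1/h_k
rx = (h[:, None] * tx).sum(axis=0)           # channel superposition at the server
rx += noise_std * rng.normal(size=d)         # receiver noise

ota_avg = rx / K                             # over-the-air estimate of the average update
true_avg = updates.mean(axis=0)
print("estimation error:", np.linalg.norm(ota_avg - true_avg))
```

In practice, transmit-power limits prevent full channel inversion for weak links, which is exactly the kind of wireless resource constraint the surveyed works optimize.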