Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks
- URL: http://arxiv.org/abs/2002.11045v1
- Date: Sat, 22 Feb 2020 14:38:11 GMT
- Title: Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks
- Authors: Changyang She and Rui Dong and Zhouyou Gu and Zhanwei Hou and Yonghui
Li and Wibowo Hardjawana and Chenyang Yang and Lingyang Song and Branka
Vucetic
- Abstract summary: We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
- Score: 84.2155885234293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the future 6th generation networks, ultra-reliable and low-latency
communications (URLLC) will lay the foundation for emerging mission-critical
applications that have stringent requirements on end-to-end delay and
reliability. Existing works on URLLC are mainly based on theoretical models and
assumptions. The model-based solutions provide useful insights, but cannot be
directly implemented in practice. In this article, we first summarize how to
apply data-driven supervised deep learning and deep reinforcement learning in
URLLC, and discuss some open problems of these methods. To address these open
problems, we develop a multi-level architecture that enables device
intelligence, edge intelligence, and cloud intelligence for URLLC. The basic
idea is to merge theoretical models and real-world data in analyzing the
latency and reliability and training deep neural networks (DNNs). Deep transfer
learning is adopted in the architecture to fine-tune the pre-trained DNNs in
non-stationary networks. Further considering that the computing capacity at
each user and each mobile edge computing server is limited, federated learning
is applied to improve the learning efficiency. Finally, we provide some
experimental and simulation results and discuss some future directions.
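As a rough illustration of two ingredients mentioned in the abstract, the sketch below fine-tunes the head of a pre-trained DNN on local data (deep transfer learning) and then merges the per-device models by federated averaging. This is a minimal sketch in PyTorch, not the paper's implementation: the network, the synthetic data, all hyperparameters, and the names PolicyDNN, fine_tune, and fed_avg are placeholder assumptions.

```python
# Minimal sketch (not from the paper): deep transfer learning via head-only
# fine-tuning, followed by federated averaging (FedAvg) of per-device models.
import copy
import torch
import torch.nn as nn

class PolicyDNN(nn.Module):
    """Toy DNN mapping channel/queue features to a resource-allocation decision."""
    def __init__(self, in_dim=16, out_dim=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.head(self.body(x))

def fine_tune(pretrained, x, y, epochs=5, lr=1e-3):
    """Transfer learning step: freeze the body, retrain only the head on local data."""
    model = copy.deepcopy(pretrained)
    for p in model.body.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def fed_avg(models):
    """Federated averaging: element-wise mean of the locally fine-tuned parameters."""
    global_state = copy.deepcopy(models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]).mean(dim=0)
    merged = copy.deepcopy(models[0])
    merged.load_state_dict(global_state)
    return merged

if __name__ == "__main__":
    torch.manual_seed(0)
    pretrained = PolicyDNN()  # stands in for a model pre-trained offline (e.g., on model-generated data)
    devices = [(torch.randn(32, 16), torch.randn(32, 4)) for _ in range(3)]  # synthetic per-device data
    local_models = [fine_tune(pretrained, x, y) for x, y in devices]
    global_model = fed_avg(local_models)  # aggregated model pushed back to the devices
    print(global_model(torch.randn(1, 16)))
```

In the multi-level architecture described above, the fine-tuning step would plausibly run at the devices or mobile edge computing servers, while the averaging step would be carried out at the edge or in the cloud.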
Related papers
- Mobile Traffic Prediction at the Edge through Distributed and Transfer Learning [2.687861184973893]
Research on this topic has concentrated on making predictions in a centralized fashion, by collecting data from the different network elements.
We propose a novel prediction framework based on edge computing which uses datasets obtained on the edge through a large measurement campaign.
arXiv Detail & Related papers (2023-10-22T23:48:13Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation (a minimal illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing workflows, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Communication-Efficient Separable Neural Network for Distributed Inference on Edge Devices [2.28438857884398]
We propose a novel method of exploiting model parallelism to separate a neural network for distributed inferences.
Under proper specifications of devices and configurations of models, our experiments show that the inference of large neural networks on edge clusters can be distributed and accelerated.
arXiv Detail & Related papers (2021-11-03T19:30:28Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Learning Centric Wireless Resource Allocation for Edge Computing: Algorithm and Experiment [15.577056429740951]
Edge intelligence is an emerging network architecture that integrates sensing, communication, and computing components, and supports various machine learning applications.
Existing methods ignore two important facts: 1) different models have heterogeneous demands on training data; 2) there is a mismatch between the simulated environment and the real-world environment.
This paper proposes the learning centric wireless resource allocation scheme that maximizes the worst learning performance of multiple tasks.
arXiv Detail & Related papers (2020-10-29T06:20:40Z)
- A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning [115.75967665222635]
Ultra-reliable and low-latency communications (URLLC) will be central for the development of various emerging mission-critical applications.
Deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLC in future 6G networks.
This tutorial illustrates how domain knowledge can be integrated into different kinds of deep learning algorithms for URLLC.
arXiv Detail & Related papers (2020-09-13T14:53:01Z)
- CLAN: Continuous Learning using Asynchronous Neuroevolution on Commodity Edge Devices [3.812706195714961]
We build a prototype distributed system of Raspberry Pis, communicating via WiFi, that runs NeuroEvolutionary (NE) learning and inference.
We evaluate the performance of such a collaborative system and detail the compute/communication characteristics of different arrangements of the system.
arXiv Detail & Related papers (2020-08-27T01:49:21Z)
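The Solving Large-scale Spatial Problems entry above states that a CNN trained on small windows can be evaluated on arbitrarily large signals. The sketch below shows why this holds for a fully convolutional 1-D network; assuming a fully convolutional architecture is my simplification, and the layer sizes and random data are illustrative only, not taken from that paper.

```python
# Minimal sketch (not from the cited paper): a fully convolutional 1-D network
# trained on short windows can be applied to a much longer signal, because
# convolutional layers impose no fixed input length.
import torch
import torch.nn as nn

# Purely convolutional model: no flatten/linear layer, so input length is free.
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(8, 1, kernel_size=1),
)

# "Training" on small windows (a single gradient step on synthetic data).
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
small = torch.randn(64, 1, 32)               # 64 windows of length 32
loss = nn.functional.mse_loss(cnn(small), small)
loss.backward()
opt.step()

# Evaluation on an arbitrarily long signal with the same weights.
long_signal = torch.randn(1, 1, 10_000)      # length 10,000, never seen in training
with torch.no_grad():
    out = cnn(long_signal)
print(out.shape)                              # torch.Size([1, 1, 10000])
```

Because every layer is convolutional, the same weights slide over any input length, which is what allows training on short windows and inference on long signals.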