Communication-Efficient and Distributed Learning Over Wireless Networks:
Principles and Applications
- URL: http://arxiv.org/abs/2008.02608v1
- Date: Thu, 6 Aug 2020 12:37:14 GMT
- Title: Communication-Efficient and Distributed Learning Over Wireless Networks:
Principles and Applications
- Authors: Jihong Park, Sumudu Samarakoon, Anis Elgabli, Joongheon Kim, Mehdi
Bennis, Seong-Lyun Kim, Mérouane Debbah
- Abstract summary: Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
- Score: 55.65768284748698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) is a promising enabler for the fifth generation (5G)
communication systems and beyond. By imbuing intelligence into the network
edge, edge nodes can proactively carry out decision-making, and thereby react
to local environmental changes and disturbances while experiencing zero
communication latency. To achieve this goal, it is essential to cater for high
ML inference accuracy at scale under time-varying channel and network dynamics,
by continuously exchanging fresh data and ML model updates in a distributed
way. Taming this new kind of data traffic boils down to improving the
communication efficiency of distributed learning by optimizing communication
payload types, transmission techniques, and scheduling, as well as ML
architectures, algorithms, and data processing methods. To this end, this
article aims to provide a holistic overview of relevant communication and ML
principles, and thereby present communication-efficient and distributed
learning frameworks with selected use cases.
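As a concrete reference point for the model-update exchange described above, here is a minimal federated-averaging sketch in NumPy. The least-squares objective, client count, and dataset-size weighting are illustrative assumptions rather than the article's specific framework.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local SGD on a least-squares objective."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def federated_averaging(clients, w, rounds=10):
    """Server aggregates client models, weighted by local dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(w.copy(), X, y))
            sizes.append(len(y))
        sizes = np.array(sizes, dtype=float)
        w = sum(s * u for s, u in zip(sizes / sizes.sum(), updates))
    return w

# Toy run: three clients with noisy observations of the same linear model.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
w = federated_averaging(clients, np.zeros(4))
print("recovery error:", np.linalg.norm(w - w_true))
```

Only model vectors cross the network here, so the per-round communication cost is the model size times the number of clients; the payload, scheduling, and compression techniques the article surveys all aim to shrink exactly this quantity.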
Related papers
- Exploring the Practicality of Federated Learning: A Survey Towards the Communication Perspective [1.088537320059347]
Federated Learning (FL) is a promising paradigm that offers significant advancements in privacy-preserving, decentralized machine learning.
However, the practical deployment of FL systems faces a significant bottleneck: the communication overhead.
This survey investigates various strategies and advancements made in communication-efficient FL.
arXiv Detail & Related papers (2024-05-30T19:21:33Z)
- An Efficient Federated Learning Framework for Training Semantic Communication System [29.593406320684448]
Most semantic communication systems are built upon advanced deep learning models.
Due to privacy and security concerns, the transmission of data is restricted.
We introduce a mechanism, called FedLol, for aggregating the global model from client models; a toy aggregation sketch follows this entry.
arXiv Detail & Related papers (2023-10-20T02:45:20Z)
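The FedLol entry above names an aggregation mechanism but does not spell it out. As a hedged illustration, the sketch below shows a loss-aware weighted average in which clients with lower local loss contribute more; the weighting rule, function name, and epsilon are assumptions for illustration, not FedLol's published scheme.

```python
import numpy as np

def aggregate_loss_weighted(client_models, client_losses, eps=1e-8):
    """Hypothetical loss-aware aggregation: clients with lower local loss
    contribute more to the global model. This is NOT FedLol's published
    rule, just one plausible weighting for illustration."""
    inv = 1.0 / (np.asarray(client_losses) + eps)
    weights = inv / inv.sum()
    return sum(w * m for w, m in zip(weights, client_models))

# Three client models of the same shape, with their local losses.
models = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
losses = [0.10, 0.40, 0.15]
print(aggregate_loss_weighted(models, losses))
```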
- Communication-oriented Model Fine-tuning for Packet-loss Resilient Distributed Inference under Highly Lossy IoT Networks [6.107812768939554]
Distributed inference (DI) is a technique for real-time applications empowered by cutting-edge deep machine learning (ML) on resource-constrained Internet of Things (IoT) devices.
In DI, computational tasks are offloaded from the IoT device to the edge server via lossy IoT networks.
We propose communication-oriented model tuning (COMtune), which achieves highly accurate, low-latency DI over unreliable communication links; see the sketch after this entry.
arXiv Detail & Related papers (2021-12-17T09:40:21Z)
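To illustrate the packet-loss-resilient tuning idea in the COMtune entry above, the sketch below fine-tunes the server-side part of a split model while randomly erasing device-to-server features, emulating a lossy IoT uplink. The two-layer split, 30% loss rate, and server-only updates are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def simulate_packet_loss(features, loss_rate, rng):
    """Zero out feature entries to emulate an unreliable IoT uplink."""
    mask = rng.random(features.shape) >= loss_rate
    return features * mask

def device_part(x, W1):
    return np.maximum(0.0, x @ W1)   # device-side layer (ReLU)

def server_part(h, W2):
    return h @ W2                    # server-side layer

# Fine-tuning loop sketch: expose the server-side layer to lossy features
# so inference stays accurate when real packets are dropped.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1)) * 0.1
X, y = rng.normal(size=(64, 8)), rng.normal(size=(64, 1))
lr, loss_rate = 1e-3, 0.3
for _ in range(200):
    h = simulate_packet_loss(device_part(X, W1), loss_rate, rng)
    err = server_part(h, W2) - y
    W2 -= lr * h.T @ err / len(y)    # update only the server-side layer
print("train MSE under 30% loss:", float((err ** 2).mean()))
```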
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Linear Regression over Networks with Communication Guarantees [1.4271989597349055]
In connected autonomous systems, data transfer takes place over communication networks with often limited resources.
This paper examines algorithms for communication-efficient learning in linear regression tasks by exploiting the informativeness of the data; an event-triggered sketch follows this entry.
arXiv Detail & Related papers (2021-03-06T15:28:21Z)
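One way to exploit data informativeness for communication-efficient regression, as in the entry above, is an event-triggered rule: a sensor transmits a sample only when the current model predicts it poorly. The sketch below uses a simple error threshold with an LMS update; both are assumptions for illustration, not necessarily the paper's informativeness measure.

```python
import numpy as np

def informative_stream_regression(stream, dim, threshold=0.5, lr=0.05):
    """Transmit a sample only if the current model's prediction error
    exceeds a threshold; redundant samples cost no communication."""
    w, sent = np.zeros(dim), 0
    for x, y in stream:
        if abs(x @ w - y) > threshold:   # sample is informative: transmit
            sent += 1
            w += lr * (y - x @ w) * x    # LMS update at the learner
    return w, sent

# Toy stream: noisy observations of a fixed linear model.
rng = np.random.default_rng(2)
w_true = rng.normal(size=3)
stream = [(x, x @ w_true + 0.05 * rng.normal())
          for x in rng.normal(size=(2000, 3))]
w, sent = informative_stream_regression(stream, dim=3)
print(f"sent {sent}/2000 samples, error {np.linalg.norm(w - w_true):.3f}")
```

As the model converges, prediction errors drop below the threshold and transmissions stop, which is the communication saving.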
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system reduces the communication cost by up to three orders of magnitude while maintaining the convergence and accuracy of training; a toy quantizer sketch follows this entry.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
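The CosSGD entry above proposes nonlinear quantization of gradients. As a hedged illustration of the general idea (not CosSGD's specific scheme), the sketch below uses a logarithmic quantizer that spends its few levels on small gradient magnitudes, where most entries concentrate.

```python
import numpy as np

def log_quantize(grad, bits=4, eps=1e-12):
    """Nonlinear (logarithmic) gradient quantizer: fine levels for small
    magnitudes, coarse for large. Illustrates the general idea only;
    CosSGD's specific scheme differs."""
    sign = np.sign(grad).astype(np.int8)
    mag = np.abs(grad)
    gmax = mag.max() + eps
    levels = 2 ** (bits - 1) - 1
    # Map magnitude onto a log scale in [0, 1], round to an integer level.
    q = np.round(levels * np.log1p(mag / gmax * (np.e - 1))).astype(np.int8)
    return sign, q, gmax

def log_dequantize(sign, q, gmax, bits=4):
    """Invert the log mapping to recover approximate gradient values."""
    levels = 2 ** (bits - 1) - 1
    mag = (np.expm1(q / levels) / (np.e - 1)) * gmax
    return sign * mag

rng = np.random.default_rng(3)
g = rng.normal(size=1000) * rng.exponential(size=1000)  # heavy-tailed grads
s, q, m = log_quantize(g)
g_hat = log_dequantize(s, q, m)
print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```

At a few bits per entry instead of 32-bit floats, payloads shrink several-fold per round; combined with sparsification and infrequent uploads, such schemes drive the orders-of-magnitude savings the entry reports.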
- A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning [115.75967665222635]
Ultra-reliable and low-latency communications (URLLC) will be central for the development of various emerging mission-critical applications.
Deep learning algorithms have been considered promising ways of developing enabling technologies for URLLC in future 6G networks.
This tutorial illustrates how domain knowledge can be integrated into different kinds of deep learning algorithms for URLLC.
arXiv Detail & Related papers (2020-09-13T14:53:01Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond [73.03743482037378]
Distributed learning has become a critical direction of the massively connected world envisioned by many.
This article discusses four key elements of scalable distributed processing and real-time data computation problems.
Practical issues and future research will also be discussed.
arXiv Detail & Related papers (2020-01-14T14:11:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.