Accordion: A Communication-Aware Machine Learning Framework for Next
Generation Networks
- URL: http://arxiv.org/abs/2302.00623v1
- Date: Thu, 12 Jan 2023 10:30:43 GMT
- Title: Accordion: A Communication-Aware Machine Learning Framework for Next
Generation Networks
- Authors: Fadhel Ayed, Antonio De Domenico, Adrian Garcia-Rodriguez, David
Lopez-Perez
- Abstract summary: We advocate for the design of ad hoc artificial intelligence (AI)/machine learning (ML) models to facilitate their usage in future smart infrastructures based on communication networks.
We present a novel communication-aware ML framework, which enables an efficient AI/ML model transfer thanks to an overhauled model training and communication protocol.
- Score: 8.296411540693706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we advocate for the design of ad hoc artificial intelligence
(AI)/machine learning (ML) models to facilitate their usage in future smart
infrastructures based on communication networks. To motivate this, we first
review key operations identified by the 3GPP for transferring AI/ML models
through 5G networks and the main existing techniques to reduce their
communication overheads. We also present a novel communication-aware ML
framework, which we refer to as Accordion, that enables an efficient AI/ML
model transfer thanks to an overhauled model training and communication
protocol. We demonstrate the communication-related benefits of Accordion,
analyse key performance trade-offs, and discuss potential research directions
within this realm.
Related papers
- Large Language Model Based Generative Error Correction: A Challenge and Baselines for Speech Recognition, Speaker Tagging, and Emotion Recognition [110.8431434620642]
We introduce the generative speech transcription error correction (GenSEC) challenge.
This challenge comprises three post-ASR language modeling tasks: (i) post-ASR transcription correction, (ii) speaker tagging, and (iii) emotion recognition.
We discuss insights from baseline evaluations, as well as lessons learned for designing future evaluations.
arXiv Detail & Related papers (2024-09-15T16:32:49Z) - Large Language Models (LLMs) for Semantic Communication in Edge-based IoT Networks [0.0]
Large Language Models (LLMs) can understand and generate human-like text thanks to extensive training on diverse datasets with billions of parameters.
LLMs can be used under the umbrella of semantic communication at the network edge for efficient communication in IoT networks.
arXiv Detail & Related papers (2024-07-30T16:57:41Z) - Integrating Pre-Trained Language Model with Physical Layer Communications [19.20941153929975]
We introduce a practical on-device AI communication framework integrated with physical layer (PHY) communication functions.
Our framework incorporates end-to-end training with channel noise to enhance resilience, employs vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization.
arXiv Detail & Related papers (2024-02-18T17:27:51Z) - Less Data, More Knowledge: Building Next Generation Semantic
Communication Networks [180.82142885410238]
We present the first rigorous vision of a scalable end-to-end semantic communication network.
We first discuss how the design of semantic communication networks requires a move from data-driven networks towards knowledge-driven ones.
By using semantic representation and languages, we show that the traditional transmitter and receiver now become a teacher and apprentice.
arXiv Detail & Related papers (2022-11-25T19:03:25Z) - Machine Learning for Performance Prediction of Channel Bonding in
Next-Generation IEEE 802.11 WLANs [1.0486135378491268]
We present the results gathered from Problem Statement 13 (PS-013), organized by Universitat Pompeu Fabra (UPF).
The primary goal was to predict the performance of next-generation Wireless Local Area Networks (WLANs) that apply Channel Bonding (CB) techniques.
arXiv Detail & Related papers (2021-05-29T05:33:07Z) - Model-Based Machine Learning for Communications [110.47840878388453]
We review existing strategies for combining model-based algorithms and machine learning from a high-level perspective.
We focus on symbol detection, which is one of the fundamental tasks of communication receivers.
arXiv Detail & Related papers (2021-01-12T19:55:34Z) - Communication-Efficient and Distributed Learning Over Wireless Networks:
Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in applying deep learning to URLLC, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence.
arXiv Detail & Related papers (2020-02-22T14:38:11Z) - Learning Structured Communication for Multi-agent Reinforcement Learning [104.64584573546524]
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
We propose a novel framework, termed Learning Structured Communication (LSC), that uses a more flexible and efficient communication topology.
arXiv Detail & Related papers (2020-02-11T07:19:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.