Integrating Pre-Trained Language Model with Physical Layer Communications
- URL: http://arxiv.org/abs/2402.11656v2
- Date: Fri, 28 Jun 2024 23:00:45 GMT
- Title: Integrating Pre-Trained Language Model with Physical Layer Communications
- Authors: Ju-Hyung Lee, Dong-Ho Lee, Joohan Lee, Jay Pujara
- Abstract summary: We introduce a practical on-device AI communication framework integrated with physical layer (PHY) communication functions.
Our framework incorporates end-to-end training with channel noise to enhance resilience, employs vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization capabilities.
- Score: 19.20941153929975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The burgeoning field of on-device AI communication, where devices exchange information directly through embedded foundation models, such as language models (LMs), requires robust, efficient, and generalizable communication frameworks. However, integrating these frameworks with existing wireless systems and effectively managing noise and bit errors pose significant challenges. In this work, we introduce a practical on-device AI communication framework, integrated with physical layer (PHY) communication functions, demonstrated through its performance on a link-level simulator. Our framework incorporates end-to-end training with channel noise to enhance resilience, employs vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization capabilities. Simulations, across various communication scenarios, reveal that our framework achieves a 50% reduction in transmission size while demonstrating substantial generalization ability and noise robustness under standardized 3GPP channel models.
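The abstract's core mechanism, VQ-VAE-style quantization for robust transmission over a noisy channel, can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: the codebook size, latent dimension, and SNR are assumptions, and a simple AWGN channel stands in for the 3GPP channel models used in the paper.

```python
# Hypothetical sketch of VQ-based robust transmission (not the paper's code):
# quantize an encoder output to the nearest codebook entry, pass the codeword
# through an AWGN channel, and recover it by nearest-neighbour lookup.
import numpy as np

rng = np.random.default_rng(0)

# Assumed VQ-VAE-style codebook: 16 codewords in a 4-dimensional latent space.
codebook = rng.normal(size=(16, 4))

def quantize(z):
    """Return the index of the codebook entry nearest to latent vector z."""
    dists = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(dists))

def awgn(x, snr_db):
    """Add white Gaussian noise at the given signal-to-noise ratio (dB)."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

# Transmitter: quantize a latent (stand-in for an encoder output) and send
# the corresponding codeword over the noisy channel.
z = rng.normal(size=4)
idx_tx = quantize(z)
received = awgn(codebook[idx_tx], snr_db=10.0)

# Receiver: snap the noisy codeword back onto the discrete codebook.
idx_rx = quantize(received)
print(idx_tx, idx_rx)  # at moderate SNR the indices typically agree
```

The discrete codebook is what gives the scheme its noise robustness: small channel perturbations are absorbed as long as the received vector stays closest to the transmitted codeword.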
Related papers
- Adaptive Semantic Token Selection for AI-native Goal-oriented Communications [11.92172357956248]
We propose a novel design for AI-native goal-oriented communications.
We exploit transformer neural networks under dynamic inference constraints on bandwidth and computation.
We show that our model improves over state-of-the-art token selection mechanisms.
arXiv Detail & Related papers (2024-04-25T13:49:50Z)
- Agent-driven Generative Semantic Communication with Cross-Modality and Prediction [57.335922373309074]
We propose a novel agent-driven generative semantic communication framework based on reinforcement learning.
In this work, we develop an agent-assisted semantic encoder with cross-modality capability, which can track semantic changes and channel conditions to perform adaptive semantic extraction and sampling.
The effectiveness of the designed models has been verified using the UA-DETRAC dataset, demonstrating the performance gains of the overall A-GSC framework.
arXiv Detail & Related papers (2024-04-10T13:24:27Z)
- Effective Communication with Dynamic Feature Compression [25.150266946722]
We study a prototypal system in which an observer must communicate its sensory data to a robot controlling a task.
We consider an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
We tested the proposed approach on the well-known CartPole reference control problem, obtaining a significant performance increase.
arXiv Detail & Related papers (2024-01-29T15:35:05Z)
- Generative AI-aided Joint Training-free Secure Semantic Communications via Multi-modal Prompts [89.04751776308656]
This paper proposes a GAI-aided SemCom system with multi-modal prompts for accurate content decoding.
In response to security concerns, we introduce the application of covert communications aided by a friendly jammer.
arXiv Detail & Related papers (2023-09-05T23:24:56Z)
- Design Principles for Model Generalization and Scalable AI Integration in Radio Access Networks [2.846642778157227]
This paper emphasizes the pivotal role of achieving model generalization in enhancing performance and enabling scalable AI integration within radio communications.
We outline design principles for model generalization in three key domains: environment for robustness, intents for adaptability to system objectives, and control tasks for reducing AI-driven control loops.
We propose a learning architecture that leverages centralization of training and data management functionalities, combined with distributed data generation.
arXiv Detail & Related papers (2023-06-09T20:46:31Z)
- Causal Semantic Communication for Digital Twins: A Generalizable Imitation Learning Approach [74.25870052841226]
A digital twin (DT) leverages a virtual representation of the physical world, along with communication (e.g., 6G), computing, and artificial intelligence (AI) technologies to enable many connected intelligence services.
Wireless systems can exploit the paradigm of semantic communication (SC) for facilitating informed decision-making under strict communication constraints.
A novel framework called causal semantic communication (CSC) is proposed for DT-based wireless systems.
arXiv Detail & Related papers (2023-04-25T00:15:00Z)
- Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression [23.36744348465991]
Coordination of robotic swarms and the remote wireless control of industrial systems are among the major use cases for 5G and beyond systems.
In this work, we consider a prototypal system in which an observer must communicate its sensory data to an actor controlling a task.
We propose an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
arXiv Detail & Related papers (2023-01-14T11:43:56Z)
- Enabling the Wireless Metaverse via Semantic Multiverse Communication [82.47169682083806]
Metaverse over wireless networks is an emerging use case of the sixth generation (6G) wireless systems.
We propose a novel semantic communication framework by decomposing the metaverse into human/machine agent-specific semantic multiverses (SMs).
An SM stored at each agent comprises a semantic encoder and a generator, leveraging recent advances in generative artificial intelligence (AI).
arXiv Detail & Related papers (2022-12-13T21:21:07Z)
- Pretraining Techniques for Sequence-to-Sequence Voice Conversion [57.65753150356411]
Sequence-to-sequence (seq2seq) voice conversion (VC) models are attractive owing to their ability to convert prosody.
We propose to transfer knowledge from other speech processing tasks where large-scale corpora are easily available, typically text-to-speech (TTS) and automatic speech recognition (ASR).
We argue that VC models with such pretrained ASR or TTS model parameters can generate effective hidden representations for high-fidelity, highly intelligible converted speech.
arXiv Detail & Related papers (2020-08-07T11:02:07Z)
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.