A Multi-Task Oriented Semantic Communication Framework for Autonomous Vehicles
- URL: http://arxiv.org/abs/2403.12997v1
- Date: Wed, 6 Mar 2024 12:04:24 GMT
- Title: A Multi-Task Oriented Semantic Communication Framework for Autonomous Vehicles
- Authors: Eslam Eldeeb, Mohammad Shehab, Hirley Alves
- Abstract summary: This work presents a multi-task-oriented semantic communication framework for connected and autonomous vehicles.
We propose a convolutional autoencoder (CAE) that performs the semantic encoding of the road traffic signs.
These encoded images are then transmitted from one CAV to another via a satellite link under challenging weather conditions.
- Score: 5.779316179788962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented semantic communication is an emerging technology that transmits only the relevant semantics of a message instead of the whole message to achieve a specific task. It reduces latency, compresses the data, and is more robust in low-SNR scenarios. This work presents a multi-task-oriented semantic communication framework for connected and autonomous vehicles (CAVs). We propose a convolutional autoencoder (CAE) that performs the semantic encoding of road traffic signs. These encoded images are then transmitted from one CAV to another via satellite in challenging weather conditions where visibility is impaired. In addition, we propose task-oriented semantic decoders for image reconstruction and classification tasks. Simulation results show that the proposed framework outperforms conventional schemes, such as QAM-16, in terms of reconstructed-image similarity and classification accuracy. In addition, it can save up to 89% of the bandwidth by sending fewer bits.
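The bandwidth saving in the abstract comes from replacing the raw pixel payload with a compact learned latent code. A minimal back-of-the-envelope sketch (not the authors' code; the image and latent sizes below are hypothetical choices for illustration, so the resulting percentage differs from the paper's reported 89%):

```python
# Hypothetical sizes: a 32x32 RGB traffic-sign image at 8 bits per channel,
# versus a CAE latent of 128 values quantized to 8 bits each.
raw_bits = 32 * 32 * 3 * 8   # 24576 bits for the uncompressed image
latent_bits = 128 * 8        # 1024 bits for the semantic code

# Fraction of bandwidth saved by sending the latent instead of the pixels.
saving = 1 - latent_bits / raw_bits
print(f"bandwidth saved: {saving:.1%}")  # about 95.8% with these assumed sizes
```

The actual saving in the paper depends on the CAE's latent dimensionality and quantization, which this sketch does not reproduce.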
Related papers
- Visual Language Model based Cross-modal Semantic Communication Systems [42.321208020228894]
We propose a novel Vision-Language Model-based Cross-modal Semantic Communication system.
The VLM-CSC comprises three novel components.
The experimental simulations validate the effectiveness, adaptability, and robustness of the CSC system.
arXiv Detail & Related papers (2024-05-06T08:59:16Z) - Agent-driven Generative Semantic Communication with Cross-Modality and Prediction [57.335922373309074]
We propose a novel agent-driven generative semantic communication framework based on reinforcement learning.
In this work, we develop an agent-assisted semantic encoder with cross-modality capability, which can track semantic changes and channel conditions to perform adaptive semantic extraction and sampling.
The effectiveness of the designed models has been verified using the UA-DETRAC dataset, demonstrating the performance gains of the overall A-GSC framework.
arXiv Detail & Related papers (2024-04-10T13:24:27Z) - Importance-Aware Image Segmentation-based Semantic Communication for Autonomous Driving [9.956303020078488]
This article studies the problem of image segmentation-based semantic communication in autonomous driving.
We propose a vehicular image segmentation-oriented semantic communication system, termed VIS-SemCom.
The proposed VIS-SemCom can achieve a coding gain of nearly 6 dB at a 60% mean intersection over union (mIoU), reduce the transmitted data amount by up to 70% at a 60% mIoU, and improve the segmentation intersection over union (IoU) of important objects by 4%, compared to traditional transmission schemes.
arXiv Detail & Related papers (2024-01-16T18:14:44Z) - Scalable AI Generative Content for Vehicular Network Semantic Communication [46.589349524682966]
This paper unveils a scalable Artificial Intelligence Generated Content (AIGC) system that leverages an encoder-decoder architecture.
The system converts images into textual representations and reconstructs them into quality-acceptable images, optimizing transmission for vehicular network semantic communication.
Experimental results suggest that the proposed method surpasses the baseline in perceiving vehicles in blind spots and effectively compresses communication data.
arXiv Detail & Related papers (2023-11-23T02:57:04Z) - Generative AI-aided Joint Training-free Secure Semantic Communications via Multi-modal Prompts [89.04751776308656]
This paper proposes a GAI-aided SemCom system with multi-modal prompts for accurate content decoding.
In response to security concerns, we introduce the application of covert communications aided by a friendly jammer.
arXiv Detail & Related papers (2023-09-05T23:24:56Z) - Communication-Efficient Framework for Distributed Image Semantic Wireless Transmission [68.69108124451263]
We propose a federated learning-based semantic communication (FLSC) framework for multi-task distributed image transmission with IoT devices.
Each link is composed of a hierarchical vision transformer (HVT)-based extractor and a task-adaptive translator.
A channel-state-information-based multiple-input multiple-output transmission module is designed to combat channel fading and noise.
arXiv Detail & Related papers (2023-08-07T16:32:14Z) - Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto paradigm of Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - Task-Oriented Image Transmission for Scene Classification in Unmanned Aerial Systems [46.64800170644672]
We propose a new aerial image transmission paradigm for the scene classification task.
A lightweight model is developed on the front-end UAV for semantic block transmission, with perception of both image content and channel conditions.
In order to achieve the tradeoff between transmission latency and classification accuracy, deep reinforcement learning is used.
arXiv Detail & Related papers (2021-12-21T02:44:49Z) - Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-based Image Retrieval [55.29233996427243]
Low-shot sketch-based image retrieval is an emerging task in computer vision.
In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks.
For solving these tasks, we propose a semantically aligned cycle-consistent generative adversarial network (SEM-PCYC).
Our results demonstrate a significant boost in any-shot performance over the state-of-the-art on the extended version of the Sketchy, TU-Berlin and QuickDraw datasets.
arXiv Detail & Related papers (2020-06-20T22:43:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.