Cross-layer scheme for low latency multiple description video streaming
over Vehicular Ad-hoc NETworks (VANETs)
- URL: http://arxiv.org/abs/2311.13603v1
- Date: Sun, 5 Nov 2023 14:34:58 GMT
- Title: Cross-layer scheme for low latency multiple description video streaming
over Vehicular Ad-hoc NETworks (VANETs)
- Authors: Mohamed Aymen Labiod, Mohamed Gharbi, Francois-Xavier Coudoux, Patrick
Corlay, Noureddine Doghmane
- Abstract summary: The new state-of-the-art high-efficiency video coding (HEVC) standard is very promising for real-time video streaming.
We propose an original cross-layer system in order to enhance received video quality in vehicular communications.
- Score: 2.2124180701409233
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There is nowadays a growing demand in vehicular communications for real-time
applications requiring video assistance. The new state-of-the-art
high-efficiency video coding (HEVC) standard is very promising for real-time
video streaming. It offers high coding efficiency, as well as dedicated low
delay coding structures. Among these, the all intra (AI) coding structure
guarantees minimal coding time at the expense of higher video bitrates, which
therefore penalizes transmission performance. In this work, we propose an
original cross-layer system in order to enhance received video quality in
vehicular communications. The system has low complexity and relies on a multiple
description coding (MDC) approach. It is based on an adaptive mapping mechanism
applied at the IEEE 802.11p standard medium access control (MAC) layer.
Simulation results in a realistic vehicular environment demonstrate that for
low delay video communications, the proposed method provides significant video
quality improvements on the receiver side.
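To make the MAC-layer mechanism more concrete, the following is a minimal Python sketch of an adaptive mapping between MDC video packets and the four IEEE 802.11p EDCA access categories. The importance metric, the queue-fill threshold, and the demotion rule are illustrative assumptions, not the exact algorithm of the paper.

```python
from enum import IntEnum

# The four EDCA access categories defined by IEEE 802.11e/802.11p
# (a higher value means a higher MAC-layer priority).
class AccessCategory(IntEnum):
    AC_BK = 0  # background
    AC_BE = 1  # best effort
    AC_VI = 2  # video
    AC_VO = 3  # voice

def map_mdc_packet(description_id: int,
                   importance: float,
                   ac_vi_fill: float,
                   demotion_threshold: float = 0.8) -> AccessCategory:
    """Map one MDC video packet to an EDCA queue (illustrative rule only).

    description_id     -- which MDC description the packet carries (0 or 1),
                          e.g., even/odd frames of an all-intra HEVC stream.
    importance         -- normalized contribution of the packet to decoded
                          quality, in [0, 1] (hypothetical metric).
    ac_vi_fill         -- current fill ratio of the AC_VI MAC queue, in [0, 1].
    demotion_threshold -- fill ratio above which low-importance packets are
                          demoted (hypothetical tuning parameter).
    """
    if ac_vi_fill < demotion_threshold:
        # Static 802.11p behaviour: all video traffic uses the video queue.
        return AccessCategory.AC_VI
    # Under congestion, keep only the most important packets in AC_VI and
    # spread the rest over lower-priority queues instead of overflowing AC_VI.
    if importance >= 0.5:
        return AccessCategory.AC_VI
    return AccessCategory.AC_BE if description_id == 0 else AccessCategory.AC_BK

# Example: a nearly full AC_VI queue demotes a low-importance packet of description 1.
print(map_mdc_packet(description_id=1, importance=0.3, ac_vi_fill=0.9).name)  # AC_BK
```

The intent of such a mapping is to relieve the high-priority video queue when the all-intra bitstream is large, so that the packets contributing most to decoded quality keep the lowest MAC delay.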
Related papers
- Generative Video Semantic Communication via Multimodal Semantic Fusion with Large Model [55.71885688565501]
We propose a scalable generative video semantic communication framework that extracts and transmits semantic information to achieve high-quality video reconstruction.
Specifically, at the transmitter, description and other condition signals are extracted from the source video, functioning as text and structural semantics, respectively.
At the receiver, diffusion-based GenAI large models are used to fuse the semantics of the multiple modalities and reconstruct the video.
arXiv Detail & Related papers (2025-02-19T15:59:07Z)
- RL-RC-DoT: A Block-level RL agent for Task-Aware Video Compression [68.31184784672227]
In modern applications such as autonomous driving, an overwhelming majority of videos serve as input for AI systems performing tasks.
It is therefore useful to optimize the encoder for a downstream task instead of for image quality.
Here, we address this challenge by controlling the Quantization Parameters (QPs) at the macro-block level to optimize the downstream task (a minimal QP-assignment sketch is given after this list).
arXiv Detail & Related papers (2025-01-21T15:36:08Z)
- When Video Coding Meets Multimodal Large Language Models: A Unified Paradigm for Video Coding [118.72266141321647]
Cross-Modality Video Coding (CMVC) is a pioneering approach to explore multimodality representation and video generative models in video coding.
During decoding, previously encoded components and video generation models are leveraged to create multiple encoding-decoding modes.
Experiments indicate that TT2V achieves effective semantic reconstruction, while IT2V exhibits competitive perceptual consistency.
arXiv Detail & Related papers (2024-08-15T11:36:18Z)
- Object-Attribute-Relation Representation Based Video Semantic Communication [35.87160453583808]
We introduce the use of object-attribute-relation (OAR) as a semantic framework for videos to facilitate low bit-rate coding.
We utilize OAR sequences for both low bit-rate representation and generative video reconstruction.
Our experiments on traffic surveillance video datasets assess the effectiveness of our approach in terms of video transmission performance.
arXiv Detail & Related papers (2024-06-15T02:19:31Z)
- Low-Latency Neural Stereo Streaming [6.49558286032794]
Low-Latency neural codec for Stereo video Streaming (LLSS) is a novel parallel stereo video coding method designed for low-latency stereo video streaming.
LLSS processes the left and right views in parallel, minimizing latency, while substantially improving rate-distortion (R-D) performance compared to both existing neural and conventional codecs.
arXiv Detail & Related papers (2024-03-26T17:11:51Z)
- Enhanced adaptive cross-layer scheme for low latency HEVC streaming over
Vehicular Ad-hoc Networks (VANETs) [2.2124180701409233]
HEVC is very promising for real-time video streaming over Vehicular Ad-hoc Networks (VANETs).
A low-complexity cross-layer mechanism is proposed to improve the end-to-end performance of HEVC video streaming in VANETs under low-delay constraints.
The proposed mechanism offers significant improvements in received video quality and end-to-end delay compared to the Enhanced Distributed Channel Access (EDCA) scheme adopted in IEEE 802.11p.
arXiv Detail & Related papers (2023-11-05T14:19:38Z)
- Region of Interest (ROI) based adaptive cross-layer system for real-time
video streaming over Vehicular Ad-hoc NETworks (VANETs) [2.2124180701409233]
We propose an algorithm that improves end-to-end video transmission quality in a vehicular context.
The proposed low-complexity solution gives the highest priority to the regions of interest in the scene.
Realistic VANET simulation results demonstrate that for HEVC compressed video communications, the proposed system offers PSNR gains of up to 11 dB on the ROI part.
arXiv Detail & Related papers (2023-11-05T13:56:04Z)
- VNVC: A Versatile Neural Video Coding Framework for Efficient
Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
arXiv Detail & Related papers (2023-06-19T03:04:57Z)
- Wireless Deep Video Semantic Transmission [14.071114007641313]
We propose a new class of high-efficiency deep joint source-channel coding methods to achieve end-to-end video transmission over wireless channels.
Our framework is referred to as deep video semantic transmission (DVST).
arXiv Detail & Related papers (2022-05-26T03:26:43Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
- An Emerging Coding Paradigm VCM: A Scalable Coding Approach Beyond
Feature and Signal [99.49099501559652]
Video Coding for Machine (VCM) aims to bridge the gap between visual feature compression and classical video coding.
We employ a conditional deep generation network to reconstruct video frames with the guidance of learned motion patterns.
By learning to extract sparse motion patterns via a predictive model, the network elegantly leverages the feature representation to generate the appearance of the to-be-coded frames.
arXiv Detail & Related papers (2020-01-09T14:18:18Z)
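The block-level idea in the RL-RC-DoT entry above can be pictured with a short, hypothetical sketch (not the paper's learned RL policy): per-block task-importance scores are turned into QP offsets around a base QP, so that task-relevant blocks are quantized more finely. The score map, offset rule, and QP range are assumptions made for illustration.

```python
import numpy as np

def assign_block_qps(importance: np.ndarray,
                     base_qp: int = 32,
                     max_offset: int = 6,
                     qp_min: int = 0,
                     qp_max: int = 51) -> np.ndarray:
    """Derive one QP per block from task-importance scores (illustrative).

    importance -- 2-D array of non-negative per-block scores (e.g., how much a
                  block influences a downstream detector); higher = more important.
    base_qp    -- QP the encoder would otherwise apply uniformly.
    max_offset -- largest QP decrease/increase applied to any block.

    Important blocks receive base_qp - offset (finer quantization, more bits),
    unimportant blocks receive base_qp + offset; centering around the mean keeps
    the average QP close to base_qp so the overall rate stays roughly comparable.
    """
    centered = importance.astype(float) - importance.mean()
    scale = np.abs(centered).max()
    if scale == 0:
        return np.full(importance.shape, base_qp, dtype=int)
    offsets = np.round(-centered / scale * max_offset).astype(int)
    return np.clip(base_qp + offsets, qp_min, qp_max)

# Example: a 2x3 grid of blocks where one block dominates the task output.
importance_map = np.array([[0.1, 0.9, 0.2],
                           [0.1, 0.3, 0.1]])
print(assign_block_qps(importance_map))
```

In a task-aware encoder, the resulting QP map would replace the uniform QP before rate control, trading bits away from background blocks toward the blocks that the downstream task actually uses.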
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.