Enhanced adaptive cross-layer scheme for low latency HEVC streaming over
Vehicular Ad-hoc Networks (VANETs)
- URL: http://arxiv.org/abs/2311.02664v1
- Date: Sun, 5 Nov 2023 14:19:38 GMT
- Title: Enhanced adaptive cross-layer scheme for low latency HEVC streaming over
Vehicular Ad-hoc Networks (VANETs)
- Authors: Mohamed Aymen Labiod, Mohamed Gharbi, François-Xavier Coudoux,
Patrick Corlay and Noureddine Doghmane
- Abstract summary: HEVC is very promising for real-time video streaming over Vehicular Ad-hoc Networks (VANETs).
A low-complexity cross-layer mechanism is proposed to improve the end-to-end performance of HEVC video streaming in VANETs under low-delay constraints.
The proposed mechanism offers significant improvements in received video quality and end-to-end delay compared to the Enhanced Distributed Channel Access (EDCA) scheme adopted in IEEE 802.11p.
- Score: 2.2124180701409233
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Vehicular communication has become a reality, driven by a variety of
applications. Among these, delivering high-quality video under the low-latency
constraints required by real-time applications is a particularly challenging
task. Thanks to its unprecedented compression efficiency, the High-Efficiency
Video Coding (HEVC) standard is very promising for real-time video streaming
over Vehicular Ad-hoc Networks (VANETs). However, these networks suffer from
variable channel quality and limited bandwidth, so ensuring satisfactory video
quality over them is a major challenge. In this work, a low-complexity
cross-layer mechanism is proposed to improve the end-to-end performance of HEVC
video streaming in VANETs under low-delay constraints. The idea is to assign
each packet of the transmitted video to the most appropriate Access Category
(AC) queue at the Medium Access Control (MAC) layer, taking into account the
temporal prediction structure of the video encoding process, the importance of
the frame, and the network traffic load. Simulation results demonstrate that,
for different targeted low-delay video communication scenarios, the proposed
mechanism offers significant improvements in received video quality and
end-to-end delay compared to the Enhanced Distributed Channel Access (EDCA)
scheme adopted in IEEE 802.11p. Both Quality of Service (QoS) and Quality of
Experience (QoE) evaluations have also been carried out to validate the
proposed approach.
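The packet-to-queue assignment described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the frame-importance ranking, the load threshold, and the specific AC choices below are assumptions made for illustration only; the paper's actual mapping is derived from its own simulation study.

```python
# Illustrative sketch (NOT the paper's exact scheme): map each HEVC video
# packet to an IEEE 802.11p EDCA Access Category, using the frame's role
# in the temporal prediction structure and the current traffic load.

# EDCA Access Categories, from highest to lowest priority.
AC_VO, AC_VI, AC_BE, AC_BK = "AC_VO", "AC_VI", "AC_BE", "AC_BK"

# Hypothetical importance ranking: I-frames anchor the GOP, reference
# P/B frames are predicted from, non-reference B frames are not.
FRAME_IMPORTANCE = {"I": 0, "P_REF": 1, "B_REF": 2, "B_NONREF": 3}

def select_access_category(frame_type: str, traffic_load: float) -> str:
    """Pick an AC queue for a video packet.

    frame_type   -- one of the FRAME_IMPORTANCE keys
    traffic_load -- estimated channel/queue load in [0, 1] (assumed input)
    """
    rank = FRAME_IMPORTANCE[frame_type]
    if rank == 0:
        # I-frames: losing one corrupts the whole GOP, so always top priority.
        return AC_VO
    if rank == 1:
        # Reference P-frames: demote one level when the network is loaded.
        return AC_VO if traffic_load < 0.5 else AC_VI
    if rank == 2:
        # Reference B-frames.
        return AC_VI if traffic_load < 0.5 else AC_BE
    # Non-reference B-frames: least important, demoted under load so they
    # do not delay packets that other frames depend on.
    return AC_BE if traffic_load < 0.5 else AC_BK
```

For example, `select_access_category("I", 0.9)` returns `"AC_VO"` while `select_access_category("B_NONREF", 0.9)` returns `"AC_BK"`: the same load pushes unimportant packets down without ever demoting intra-coded frames.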
Related papers
- VideoQA-SC: Adaptive Semantic Communication for Video Question Answering [21.0279034601774]
We propose an end-to-end SC system for video question answering tasks called VideoQA-SC.
Our goal is to accomplish VideoQA tasks directly based on video semantics over noisy or fading wireless channels.
Our results show the great potential of task-oriented SC system design for video applications.
arXiv Detail & Related papers (2024-05-17T06:11:10Z) - Cross-layer scheme for low latency multiple description video streaming
over Vehicular Ad-hoc NETworks (VANETs) [2.2124180701409233]
The state-of-the-art High-Efficiency Video Coding (HEVC) standard is very promising for real-time video streaming.
We propose an original cross-layer system in order to enhance received video quality in vehicular communications.
arXiv Detail & Related papers (2023-11-05T14:34:58Z) - Deep Unsupervised Key Frame Extraction for Efficient Video
Classification [63.25852915237032]
This work presents an unsupervised method to retrieve key frames, combining a Convolutional Neural Network (CNN) with Temporal Segment Density Peaks Clustering (TSDPC).
The proposed TSDPC is a generic and powerful framework with two advantages over previous works; one is that it can determine the number of key frames automatically.
Furthermore, a Long Short-Term Memory network (LSTM) is added on the top of the CNN to further elevate the performance of classification.
arXiv Detail & Related papers (2022-11-12T20:45:35Z) - Neighbourhood Representative Sampling for Efficient End-to-end Video
Quality Assessment [60.57703721744873]
The increased resolution of real-world videos presents a dilemma between efficiency and accuracy for deep Video Quality Assessment (VQA).
In this work, we propose a unified scheme, spatial-temporal grid mini-cube sampling (St-GMS) to get a novel type of sample, named fragments.
With fragments and FANet, the proposed efficient end-to-end FAST-VQA and FasterVQA achieve significantly better performance than existing approaches on all VQA benchmarks.
arXiv Detail & Related papers (2022-10-11T11:38:07Z) - FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment
Sampling [54.31355080688127]
Current deep video quality assessment (VQA) methods are usually with high computational costs when evaluating high-resolution videos.
We propose Grid Mini-patch Sampling (GMS), which allows consideration of local quality by sampling patches at their raw resolution.
We build the Fragment Attention Network (FANet) specially designed to accommodate fragments as inputs.
FAST-VQA improves state-of-the-art accuracy by around 10% while reducing 99.5% FLOPs on 1080P high-resolution videos.
arXiv Detail & Related papers (2022-07-06T11:11:43Z) - STIP: A SpatioTemporal Information-Preserving and Perception-Augmented
Model for High-Resolution Video Prediction [78.129039340528]
We propose a SpatioTemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems.
The proposed model aims to preserve the spatiotemporal information of videos during feature extraction and state transitions.
Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods.
arXiv Detail & Related papers (2022-06-09T09:49:04Z) - FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-its-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z) - DeepWiVe: Deep-Learning-Aided Wireless Video Transmission [0.0]
We present DeepWiVe, the first-ever end-to-end joint source-channel coding (JSCC) video transmission scheme.
We use deep neural networks (DNNs) to map video signals to channel symbols, combining video compression, channel coding, and modulation steps into a single neural transform.
Our results show that DeepWiVe can overcome the cliff-effect, which is prevalent in conventional separation-based digital communication schemes.
arXiv Detail & Related papers (2021-11-25T11:34:24Z) - CANS: Communication Limited Camera Network Self-Configuration for
Intelligent Industrial Surveillance [8.360870648463653]
Real-time and intelligent video surveillance via camera networks involves computation-intensive vision detection tasks with massive video data.
Multiple video streams compete for limited communication resources on the link between edge devices and camera networks.
An adaptive camera network self-configuration method (CANS) of video surveillance is proposed to cope with multiple video streams of heterogeneous quality of service.
arXiv Detail & Related papers (2021-09-13T01:54:33Z) - Non-Cooperative Game Theory Based Rate Adaptation for Dynamic Video
Streaming over HTTP [89.30855958779425]
Dynamic Adaptive Streaming over HTTP (DASH) has proven to be an emerging and promising multimedia streaming technique.
We propose a novel algorithm to optimally allocate the limited export bandwidth of the server to multi-users to maximize their Quality of Experience (QoE) with fairness guaranteed.
arXiv Detail & Related papers (2019-12-27T01:19:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.