Deep Learning-Based Real-Time Rate Control for Live Streaming on
Wireless Networks
- URL: http://arxiv.org/abs/2310.06857v1
- Date: Wed, 27 Sep 2023 17:53:35 GMT
- Title: Deep Learning-Based Real-Time Rate Control for Live Streaming on
Wireless Networks
- Authors: Matin Mortaheb, Mohammad A. Amir Khojastepour, Srimat T. Chakradhar,
Sennur Ulukus
- Abstract summary: Suboptimal selection of encoder parameters can lead to video quality loss due to underutilized bandwidth or the introduction of artifacts due to packet loss.
A real-time deep learning based H.264 controller is proposed to dynamically estimate optimal encoder parameters with negligible delay in real-time.
Remarkably, improvements of 10-20 dB in PSNR with respect to state-of-the-art adaptive video streaming are achieved, with an average packet drop rate as low as 0.002.
- Score: 31.285983939625098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing wireless users with high-quality video content has become
increasingly important. However, ensuring consistent video quality poses
challenges due to variable encoded bitrate caused by dynamic video content and
fluctuating channel bitrate caused by wireless fading effects. Suboptimal
selection of encoder parameters can lead to video quality loss due to
underutilized bandwidth or the introduction of video artifacts due to packet
loss. To address this, a real-time deep learning based H.264 controller is
proposed. This controller leverages instantaneous channel quality data derived
from the physical layer, along with the video chunk, to dynamically estimate
the optimal encoder parameters with a negligible delay in real-time. The
objective is to maintain an encoded video bitrate slightly below the available
channel bitrate. Experimental results, conducted on both the QCIF dataset and a
diverse selection of random videos from public datasets, validate the
effectiveness of the approach. Remarkably, improvements of 10-20 dB in PSNR
with respect to state-of-the-art adaptive bitrate video streaming are
achieved, with an average packet drop rate as low as 0.002.
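The paper's stated objective is to keep the encoded video bitrate slightly below the available channel bitrate, and it reports quality in PSNR. As a minimal sketch of those two ideas (not the authors' controller: the safety margin value and function names below are illustrative assumptions):

```python
import math

def target_bitrate(channel_bps: float, margin: float = 0.9) -> float:
    # Hedged sketch: aim the encoder slightly below the estimated channel
    # rate, per the paper's objective. The 0.9 margin is an assumption,
    # not a value from the paper.
    return margin * channel_bps

def psnr(ref, rec, peak: float = 255.0) -> float:
    # PSNR between two equal-length pixel sequences (8-bit peak by default):
    # 10 * log10(peak^2 / MSE); identical inputs give infinite PSNR.
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(peak ** 2 / mse)
```

In this framing, a 10-20 dB PSNR gain corresponds to a 10-100x reduction in mean squared reconstruction error, which is what avoiding packet-loss artifacts buys relative to overshooting the channel rate.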
Related papers
- Adaptive Caching for Faster Video Generation with Diffusion Transformers [52.73348147077075]
Diffusion Transformers (DiTs) rely on larger models and heavier attention mechanisms, resulting in slower inference speeds.
We introduce a training-free method to accelerate video DiTs, termed Adaptive Caching (AdaCache)
We also introduce a Motion Regularization (MoReg) scheme to utilize video information within AdaCache, controlling the compute allocation based on motion content.
arXiv Detail & Related papers (2024-11-04T18:59:44Z)
- Prediction and Reference Quality Adaptation for Learned Video Compression [54.58691829087094]
We propose a confidence-based prediction quality adaptation (PQA) module to provide explicit discrimination for the spatial and channel-wise prediction quality difference.
We also propose a reference quality adaptation (RQA) module and an associated repeat-long training strategy to provide dynamic spatially variant filters for diverse reference qualities.
arXiv Detail & Related papers (2024-06-20T09:03:26Z)
- A Parametric Rate-Distortion Model for Video Transcoding [7.1741986121107235]
We introduce a parametric rate-distortion (R-D) transcoder model.
Our model excels at predicting distortion at various rates without the need for encoding the video.
It can be used to achieve visual quality improvement (in terms of PSNR) via trans-sizing.
arXiv Detail & Related papers (2024-04-13T15:37:57Z)
- NU-Class Net: A Novel Approach for Video Quality Enhancement [1.7763979745248648]
This paper introduces NU-Class Net, an innovative deep-learning model designed to mitigate compression artifacts stemming from lossy compression codecs.
By employing the NU-Class Net, the video encoder within the video-capturing node can reduce output quality, thereby generating low-bit-rate videos.
Experimental results affirm the efficacy of the proposed model in enhancing the perceptible quality of videos, especially those streamed at low bit rates.
arXiv Detail & Related papers (2024-01-02T11:46:42Z)
- Deep Learning-Based Real-Time Quality Control of Standard Video Compression for Live Streaming [31.285983939625098]
A real-time deep learning-based H.264 controller is proposed.
It estimates optimal encoder parameters based on the content of a video chunk with minimal delay.
It achieves improvements of up to 2.5 times in average bandwidth usage.
arXiv Detail & Related papers (2023-11-21T18:28:35Z)
- Video Compression with Arbitrary Rescaling Network [8.489428003916622]
We propose a rate-guided arbitrary rescaling network (RARN) for video resizing before encoding.
The lightweight RARN structure can process FHD (1080p) content at real-time speed (91 FPS) and obtain a considerable rate reduction.
arXiv Detail & Related papers (2023-06-07T07:15:18Z)
- Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement [74.1052624663082]
We develop a deep learning architecture capable of restoring detail to compressed videos.
We show that this improves restoration accuracy compared to prior compression correction methods.
We condition our model on quantization data which is readily available in the bitstream.
arXiv Detail & Related papers (2022-01-31T18:56:04Z)
- Ultra-low bitrate video conferencing using deep image animation [7.263312285502382]
We propose a novel deep learning approach to ultra-low bitrate video compression for video conferencing applications.
We employ deep neural networks to encode motion information as keypoint displacement and reconstruct the video signal at the decoder side.
arXiv Detail & Related papers (2020-12-01T09:06:34Z)
- Encoding in the Dark Grand Challenge: An Overview [60.9261003831389]
We propose a Grand Challenge on encoding low-light video sequences.
VVC achieves high performance compared to simply denoising the video source prior to encoding.
The quality of the video streams can be further improved by employing a post-processing image enhancement method.
arXiv Detail & Related papers (2020-05-07T08:22:56Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.