LL-GABR: Energy Efficient Live Video Streaming Using Reinforcement
Learning
- URL: http://arxiv.org/abs/2402.09392v1
- Date: Wed, 14 Feb 2024 18:43:19 GMT
- Title: LL-GABR: Energy Efficient Live Video Streaming Using Reinforcement
Learning
- Authors: Adithya Raman, Bekir Turkkan and Tevfik Kosar
- Abstract summary: We propose LL-GABR, a deep reinforcement learning approach that models the QoE using perceived video quality instead of bitrate and additionally accounts for energy consumption.
We show that LL-GABR outperforms the state-of-the-art approaches by up to 44% in perceptual QoE and achieves a 73% increase in energy efficiency.
- Score: 3.360922672565235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the recent years, research and development in adaptive bitrate (ABR)
algorithms for live video streaming have been successful in improving users'
quality of experience (QoE) by reducing latency to near real-time levels while
delivering higher bitrate videos with minimal rebuffering time. However, the
QoE models used by these ABR algorithms do not take into account that a large
portion of live video streaming clients use mobile devices where a higher
bitrate does not necessarily translate into higher perceived quality. Ignoring
perceived quality results in playing videos at higher bitrates without a
significant increase in perceptual video quality and becomes a burden for
battery-constrained mobile devices due to higher energy consumption. In this
paper, we propose LL-GABR, a deep reinforcement learning approach that models
the QoE using perceived video quality instead of bitrate and uses energy
consumption along with other metrics like latency, rebuffering events, and
smoothness. LL-GABR makes no assumptions about the underlying video,
environment, or network settings and can operate flexibly on different video
titles, each having a different bitrate encoding ladder without additional
re-training, unlike existing learning-based ABRs. Trace-driven experimental
results show that LL-GABR outperforms the state-of-the-art approaches by up to
44% in terms of perceptual QoE and achieves a 73% increase in energy efficiency
as a result of reducing net energy consumption by 11%.
Related papers
- Plug-and-Play Versatile Compressed Video Enhancement [57.62582951699999]
Video compression effectively reduces file sizes, making real-time cloud computing possible.
However, it comes at the cost of visual quality and challenges the robustness of downstream vision models.
We present a versatile enhancement framework that adaptively enhances videos under different compression settings.
arXiv Detail & Related papers (2025-04-21T18:39:31Z) - H3AE: High Compression, High Speed, and High Quality AutoEncoder for Video Diffusion Models [76.1519545010611]
Autoencoder (AE) is the key to the success of latent diffusion models for image and video generation.
In this work, we examine the architecture design choices and optimize the computation distribution to obtain efficient and high-compression video AEs.
Our AE achieves an ultra-high compression ratio and real-time decoding speed on mobile while outperforming prior art in terms of reconstruction metrics.
arXiv Detail & Related papers (2025-04-14T17:59:06Z) - Adaptive Caching for Faster Video Generation with Diffusion Transformers [52.73348147077075]
Diffusion Transformers (DiTs) rely on larger models and heavier attention mechanisms, resulting in slower inference speeds.
We introduce a training-free method to accelerate video DiTs, termed Adaptive Caching (AdaCache).
We also introduce a Motion Regularization (MoReg) scheme to utilize video information within AdaCache, controlling the compute allocation based on motion content.
arXiv Detail & Related papers (2024-11-04T18:59:44Z) - Binarized Low-light Raw Video Enhancement [49.65466843856074]
Deep neural networks have achieved excellent performance on low-light raw video enhancement.
In this paper, we explore the feasibility of applying the extremely compact binary neural network (BNN) to low-light raw video enhancement.
arXiv Detail & Related papers (2024-03-29T02:55:07Z) - NU-Class Net: A Novel Approach for Video Quality Enhancement [1.7763979745248648]
This paper introduces NU-Class Net, an innovative deep-learning model designed to mitigate compression artifacts stemming from lossy compression codecs.
By pairing with NU-Class Net on the receiving side, the video encoder within the video-capturing node can reduce its output quality and generate low-bit-rate videos, with the resulting artifacts mitigated after decoding.
Experimental results affirm the efficacy of the proposed model in enhancing the perceptible quality of videos, especially those streamed at low bit rates.
arXiv Detail & Related papers (2024-01-02T11:46:42Z) - Deep Learning-Based Real-Time Quality Control of Standard Video
Compression for Live Streaming [31.285983939625098]
A real-time deep learning-based H.264 controller is proposed.
It estimates optimal encoder parameters based on the content of a video chunk with minimal delay.
It achieves improvements of up to 2.5 times in average bandwidth usage.
arXiv Detail & Related papers (2023-11-21T18:28:35Z) - Deep Learning-Based Real-Time Rate Control for Live Streaming on
Wireless Networks [31.285983939625098]
Suboptimal selection of encoder parameters can lead to video quality loss due to bandwidth constraints or to artifacts introduced by packet loss.
A real-time deep learning based H.264 controller is proposed to dynamically estimate optimal encoder parameters with a negligible delay in real-time.
Remarkably, improvements of 10-20 dB in PSNR with respect to state-of-the-art adaptive video streaming are achieved, with an average packet drop rate as low as 0.002.
arXiv Detail & Related papers (2023-09-27T17:53:35Z) - AccDecoder: Accelerated Decoding for Neural-enhanced Video Analytics [26.012783785622073]
Existing surveillance systems often collect low-quality video because of poor-quality cameras or over-compressed/pruned video streaming protocols.
We present AccDecoder, a novel accelerated decoder for real-time and neural network-based video analytics.
arXiv Detail & Related papers (2023-01-20T16:30:44Z) - FAVER: Blind Quality Prediction of Variable Frame Rate Videos [47.951054608064126]
Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
We propose a first-of-its-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER).
Our experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost.
arXiv Detail & Related papers (2022-01-05T07:54:12Z) - High Frame Rate Video Quality Assessment using VMAF and Entropic
Differences [50.265638572116984]
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rate and compression factor.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
arXiv Detail & Related papers (2021-09-27T04:08:12Z) - NeuSaver: Neural Adaptive Power Consumption Optimization for Mobile
Video Streaming [3.3194866396158003]
NeuSaver applies an adaptive frame rate to each video chunk without compromising user experience, generating an optimal policy that determines the appropriate frame rate for each chunk.
NeuSaver effectively reduces the power consumption of mobile devices when streaming video by an average of 16.14% and up to 23.12% while achieving high QoE.
arXiv Detail & Related papers (2021-07-15T05:17:17Z) - Multi-level Wavelet-based Generative Adversarial Network for Perceptual
Quality Enhancement of Compressed Video [51.631731922593225]
Existing methods mainly focus on enhancing the objective quality of compressed video while ignoring its perceptual quality.
We propose a novel generative adversarial network (GAN) based on multi-level wavelet packet transform (WPT) to enhance the perceptual quality of compressed video.
arXiv Detail & Related papers (2020-08-02T15:01:38Z) - Subjective and Objective Quality Assessment of High Frame Rate Videos [60.970191379802095]
High frame rate (HFR) videos are becoming increasingly common with the tremendous popularity of live, high-action streaming content such as sports.
The LIVE-YT-HFR dataset comprises 480 videos spanning 6 different frame rates, obtained from 16 diverse contents.
To obtain subjective labels on the videos, we conducted a human study yielding 19,000 quality ratings from a pool of 85 subjects.
arXiv Detail & Related papers (2020-07-22T19:11:42Z)