BONES: Near-Optimal Neural-Enhanced Video Streaming
- URL: http://arxiv.org/abs/2310.09920v2
- Date: Wed, 10 Apr 2024 05:39:23 GMT
- Title: BONES: Near-Optimal Neural-Enhanced Video Streaming
- Authors: Lingdong Wang, Simran Singh, Jacob Chakareski, Mohammad Hajiesmaili, Ramesh K. Sitaraman
- Abstract summary: Neural-Enhanced Streaming (NES) incorporates this new approach into video streaming, allowing users to download low-quality video segments and then enhance them to obtain high-quality content without disrupting playback of the video stream.
We introduce BONES, an NES control algorithm that jointly manages the network and computational resources to maximize the quality of experience (QoE) of the user.
- Score: 9.4673637872926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accessing high-quality video content can be challenging due to insufficient and unstable network bandwidth. Recent advances in neural enhancement have shown promising results in improving the quality of degraded videos through deep learning. Neural-Enhanced Streaming (NES) incorporates this new approach into video streaming, allowing users to download low-quality video segments and then enhance them to obtain high-quality content without disrupting playback of the video stream. We introduce BONES, an NES control algorithm that jointly manages the network and computational resources to maximize the quality of experience (QoE) of the user. BONES formulates NES as a Lyapunov optimization problem and solves it in an online manner with near-optimal performance, making it the first NES algorithm to provide a theoretical performance guarantee. Comprehensive experimental results indicate that BONES increases QoE by 5% to 20% over state-of-the-art algorithms with minimal overhead. Our code is available at https://github.com/UMass-LIDS/bones.
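To illustrate the flavor of the Lyapunov-optimization approach the abstract describes, the sketch below shows a generic drift-plus-penalty controller that jointly picks a download quality and an enhancement level per segment. This is a hypothetical illustration, not the authors' BONES implementation: all bitrates, GPU costs, the additive QoE model, and the 4-second segment duration are assumed values chosen for the example.

```python
# Hypothetical sketch of a Lyapunov drift-plus-penalty controller for
# neural-enhanced streaming, in the spirit of BONES (not the authors' code).
# All names, constants, and the QoE model below are illustrative assumptions.

def choose_action(bandwidth_est, q_net, q_cmp, V=1.0):
    """Pick a (download_quality, enhancement_level) pair for the next segment
    by minimizing the drift-plus-penalty objective: queue-weighted resource
    costs minus V-weighted QoE utility."""
    SEGMENT_SEC = 4.0
    # (bitrate in Mbps, base QoE) per download quality level -- assumed values
    qualities = [(1.0, 1.0), (2.5, 2.0), (5.0, 3.0)]
    # (GPU-seconds per segment, QoE gain) per enhancement level -- assumed
    enhancements = [(0.0, 0.0), (1.0, 0.8), (2.5, 1.5)]

    best, best_cost = None, float("inf")
    for qi, (rate_mbps, base_qoe) in enumerate(qualities):
        download_sec = rate_mbps * SEGMENT_SEC / bandwidth_est
        for ei, (gpu_sec, gain) in enumerate(enhancements):
            # Drift-plus-penalty objective: smaller is better.
            cost = q_net * download_sec + q_cmp * gpu_sec - V * (base_qoe + gain)
            if cost < best_cost:
                best, best_cost = (qi, ei), cost
    return best

def update_queues(q_net, q_cmp, download_sec, gpu_sec):
    """Virtual queues grow with resource use and drain by one segment duration,
    keeping long-run download and compute time within real-time budgets."""
    SEGMENT_SEC = 4.0
    q_net = max(q_net + download_sec - SEGMENT_SEC, 0.0)
    q_cmp = max(q_cmp + gpu_sec - SEGMENT_SEC, 0.0)
    return q_net, q_cmp
```

With empty queues the controller greedily maximizes QoE; as the virtual queues build up (download or enhancement is falling behind real time), the queue-weighted costs dominate and it backs off to cheaper actions. The parameter V trades off QoE against queue stability, which is the standard knob behind near-optimality guarantees in this framework.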
Related papers
- RAIN: Real-time Animation of Infinite Video Stream [52.97171098038888]
RAIN is a pipeline solution capable of animating infinite video streams in real-time with low latency.
RAIN generates video frames with much shorter latency and faster speed, while maintaining long-range attention over extended video streams.
RAIN can animate characters in real-time with much better quality, accuracy, and consistency than competitors.
arXiv Detail & Related papers (2024-12-27T07:13:15Z) - Binarized Low-light Raw Video Enhancement [49.65466843856074]
Deep neural networks have achieved excellent performance on low-light raw video enhancement.
In this paper, we explore the feasibility of applying the extremely compact binary neural network (BNN) to low-light raw video enhancement.
arXiv Detail & Related papers (2024-03-29T02:55:07Z) - Super Efficient Neural Network for Compression Artifacts Reduction and Super Resolution [2.0762623979470205]
We propose a lightweight convolutional neural network (CNN)-based algorithm which simultaneously performs artifacts reduction and super resolution.
The output shows a 4-6 point increase in Video Multi-Method Assessment Fusion (VMAF) score compared to traditional upscaling approaches.
arXiv Detail & Related papers (2024-01-26T04:11:14Z) - NU-Class Net: A Novel Approach for Video Quality Enhancement [1.7763979745248648]
This paper introduces NU-Class Net, an innovative deep-learning model designed to mitigate compression artifacts stemming from lossy compression codecs.
By employing the NU-Class Net, the video encoder within the video-capturing node can reduce output quality, thereby generating low-bit-rate videos.
Experimental results affirm the efficacy of the proposed model in enhancing the perceptible quality of videos, especially those streamed at low bit rates.
arXiv Detail & Related papers (2024-01-02T11:46:42Z) - AccDecoder: Accelerated Decoding for Neural-enhanced Video Analytics [26.012783785622073]
Existing surveillance systems collect low-quality video because of poor-quality cameras or over-compressed/pruned video streaming protocols.
We present AccDecoder, a novel accelerated decoder for real-time and neural network-based video analytics.
arXiv Detail & Related papers (2023-01-20T16:30:44Z) - CaDM: Codec-aware Diffusion Modeling for Neural-enhanced Video Streaming [15.115975994657514]
We present Codec-aware Diffusion Modeling (CaDM), a novel Neural-enhanced Video Streaming (NVS) paradigm.
First, CaDM improves the encoder's compression efficiency by simultaneously reducing the resolution and color bit-depth of video frames.
arXiv Detail & Related papers (2022-11-15T05:14:48Z) - A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z) - Low-Fidelity End-to-End Video Encoder Pre-training for Temporal Action Localization [96.73647162960842]
TAL is a fundamental yet challenging task in video understanding.
Existing TAL methods rely on pre-training a video encoder through action classification supervision.
We introduce a novel low-fidelity end-to-end (LoFi) video encoder pre-training method.
arXiv Detail & Related papers (2021-03-28T22:18:14Z) - Encoding in the Dark Grand Challenge: An Overview [60.9261003831389]
We propose a Grand Challenge on encoding low-light video sequences.
VVC achieves a high performance compared to simply denoising the video source prior to encoding.
The quality of the video streams can be further improved by employing a post-processing image enhancement method.
arXiv Detail & Related papers (2020-05-07T08:22:56Z) - Towards Deep Learning Methods for Quality Assessment of Computer-Generated Imagery [2.580765958706854]
In contrast to traditional video content, gaming content has special characteristics such as extremely high motion for some games.
In this paper, we outline our plan to build a deep learning-based quality metric for video gaming quality assessment.
arXiv Detail & Related papers (2020-05-02T14:08:39Z) - Non-Adversarial Video Synthesis with Learned Priors [53.26777815740381]
We focus on the problem of generating videos from latent noise vectors, without any reference input frames.
We develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning.
Our approach generates superior quality videos compared to the existing state-of-the-art methods.
arXiv Detail & Related papers (2020-03-21T02:57:33Z) - Non-Cooperative Game Theory Based Rate Adaptation for Dynamic Video Streaming over HTTP [89.30855958779425]
Dynamic Adaptive Streaming over HTTP (DASH) has been shown to be an emerging and promising multimedia streaming technique.
We propose a novel algorithm to optimally allocate the limited export bandwidth of the server among multiple users to maximize their Quality of Experience (QoE) with fairness guaranteed.
arXiv Detail & Related papers (2019-12-27T01:19:14Z)
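As a point of reference for the fair bandwidth-allocation problem that last paper addresses, the snippet below sketches the standard proportionally fair baseline rather than the paper's game-theoretic algorithm: maximizing a weighted sum of log-rates subject to a capacity constraint yields an allocation proportional to the users' weights. The function name and weights are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): proportionally fair
# allocation of a server's export bandwidth among competing DASH clients.
# Maximizing sum_i w_i * log(r_i) subject to sum_i r_i <= C has the
# closed-form solution r_i = C * w_i / sum(w), a common fairness baseline.

def proportional_fair(capacity, weights):
    """Split `capacity` (e.g. Mbps) among users in proportion to `weights`."""
    total = sum(weights)
    return [capacity * w / total for w in weights]
```

For example, splitting 10 Mbps among three clients with weights 1, 1, and 2 gives 2.5, 2.5, and 5 Mbps; equal weights recover an equal split.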
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.