Real-Time Neural Video Recovery and Enhancement on Mobile Devices
- URL: http://arxiv.org/abs/2307.12152v1
- Date: Sat, 22 Jul 2023 19:52:04 GMT
- Title: Real-Time Neural Video Recovery and Enhancement on Mobile Devices
- Authors: Zhaoyuan He, Yifan Yang, Lili Qiu, Kyoungjun Park
- Abstract summary: We present a novel approach for real-time video enhancement on mobile devices.
We have implemented our approach on an iPhone 12, and it can support 30 frames per second (FPS).
Our approach results in a significant increase in video QoE of 24% - 82% in our video streaming system.
- Score: 15.343787475565836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As mobile devices become increasingly popular for video streaming, it's
crucial to optimize the streaming experience for these devices. Although deep
learning-based video enhancement techniques are gaining attention, most of them
cannot support real-time enhancement on mobile devices. Additionally, many of
these techniques are focused solely on super-resolution and cannot handle
partial or complete loss or corruption of video frames, which is common on the
Internet and wireless networks.
To overcome these challenges, we present a novel approach in this paper. Our
approach consists of (i) a novel video frame recovery scheme, (ii) a new
super-resolution algorithm, and (iii) a receiver enhancement-aware video bit
rate adaptation algorithm. We have implemented our approach on an iPhone 12,
and it can support 30 frames per second (FPS). We have evaluated our approach
in various networks such as WiFi, 3G, 4G, and 5G networks. Our evaluation shows
that our approach enables real-time enhancement and results in a significant
increase in video QoE (Quality of Experience) of 24% - 82% in our video
streaming system.
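To make the receiver enhancement-aware bitrate adaptation idea (component iii) concrete, here is a minimal sketch: instead of ranking bitrate rungs by delivered quality alone, the sender credits each rung with the quality the client's on-device recovery and super-resolution are expected to add. The function name, bitrate ladder, and gain values below are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch only: the quality model, bitrate ladder, and enhancement
# gains are hypothetical and not published by the paper.

def select_bitrate(bandwidth_mbps, ladder, enhancement_gain):
    """Pick the sustainable bitrate with the highest *enhanced* quality,
    i.e., base quality plus the quality the receiver's neural enhancement
    is expected to add on top of that rung."""
    best_rate, best_quality = None, float("-inf")
    for rate_mbps, base_quality in ladder:
        if rate_mbps > bandwidth_mbps:        # rung not sustainable
            continue
        enhanced = base_quality + enhancement_gain.get(rate_mbps, 0.0)
        if enhanced > best_quality:
            best_rate, best_quality = rate_mbps, enhanced
    return best_rate

# Hypothetical ladder: (bitrate in Mbps, baseline quality score, VMAF-like)
ladder = [(0.5, 55.0), (1.5, 70.0), (3.0, 80.0), (6.0, 90.0)]
# Hypothetical quality added per rung by on-device recovery/super-resolution.
enhancement_gain = {0.5: 18.0, 1.5: 12.0, 3.0: 6.0, 6.0: 2.0}

print(select_bitrate(2.0, ladder, enhancement_gain))  # -> 1.5
```

Under this scoring, a lower rung can win when the enhancement gain at low bitrates outweighs the raw quality gap, which is the intuition behind coupling the adaptation logic to the receiver's enhancement capability.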
Related papers
- Adaptive Caching for Faster Video Generation with Diffusion Transformers [52.73348147077075]
Diffusion Transformers (DiTs) rely on larger models and heavier attention mechanisms, resulting in slower inference speeds.
We introduce a training-free method to accelerate video DiTs, termed Adaptive Caching (AdaCache).
We also introduce a Motion Regularization (MoReg) scheme to utilize video information within AdaCache, controlling the compute allocation based on motion content.
arXiv Detail & Related papers (2024-11-04T18:59:44Z)
- AIM 2024 Challenge on Efficient Video Super-Resolution for AV1 Compressed Content [56.552444900457395]
Video super-resolution (VSR) is a critical task for enhancing low-bitrate and low-resolution videos, particularly in streaming applications.
In this work, we compile different methods to address these challenges; the solutions are end-to-end, real-time video super-resolution frameworks.
The proposed solutions tackle video upscaling for two applications: 540p to 4K (x4) as a general case, and 360p to 1080p (x3), which is more tailored towards mobile devices.
arXiv Detail & Related papers (2024-09-25T18:12:19Z)
- Towards Real-time Video Compressive Sensing on Mobile Devices [18.96331666620252]
Video Snapshot Compressive Imaging (SCI) uses a low-speed 2D camera to capture high-speed scenes as snapshot compressed measurements.
We present an effective approach for video SCI reconstruction, dubbed MobileSCI, which can run at real-time speed on mobile devices.
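For readers unfamiliar with SCI, the standard snapshot measurement model behind this setup can be sketched as follows; this illustrates the classical forward model only, not MobileSCI's reconstruction network, and the sizes and masks are arbitrary.

```python
# Minimal sketch of the standard video SCI measurement model: B high-speed
# frames are modulated by per-frame binary masks and summed into one snapshot.
import numpy as np

rng = np.random.default_rng(0)
B, H, W = 8, 256, 256                       # frames per snapshot, spatial size
frames = rng.random((B, H, W))              # stand-in for the high-speed scene
masks = rng.integers(0, 2, size=(B, H, W))  # per-frame binary modulation masks

# Snapshot compressed measurement: Y = sum_b M_b * X_b
measurement = (masks * frames).sum(axis=0)
print(measurement.shape)                    # (256, 256): one 2D capture encodes B frames
```

Reconstruction then amounts to inverting this many-to-one mapping, i.e., recovering the B frames from the single measurement and the known masks, which is what MobileSCI aims to do in real time on a phone.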
arXiv Detail & Related papers (2024-08-14T13:03:31Z)
- Power Efficient Video Super-Resolution on Mobile NPUs with Deep Learning, Mobile AI & AIM 2022 Challenge: Report [97.01510729548531]
We propose a real-time video super-resolution solution for mobile NPUs optimized for low energy consumption.
Models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit.
All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS].
arXiv Detail & Related papers (2022-11-07T22:33:19Z)
- Real-Time Video Super-Resolution on Smartphones with Deep Learning, Mobile AI 2021 Challenge: Report [135.69469815238193]
Video super-resolution has become one of the most important mobile-related problems due to the rise of video communication and streaming services.
To address this problem, we introduce the first Mobile AI challenge, where the target is to develop end-to-end deep learning-based video super-resolution solutions.
The proposed solutions are fully compatible with any mobile GPU and can upscale videos to HD resolution at up to 80 FPS while demonstrating high fidelity results.
arXiv Detail & Related papers (2021-05-17T13:40:50Z)
- An Efficient Recurrent Adversarial Framework for Unsupervised Real-Time Video Enhancement [132.60976158877608]
We propose an efficient adversarial video enhancement framework that learns directly from unpaired video examples.
In particular, our framework introduces new recurrent cells that consist of interleaved local and global modules for implicit integration of spatial and temporal information.
The proposed design allows our recurrent cells to efficiently propagate temporal information across frames and reduces the need for high-complexity networks.
arXiv Detail & Related papers (2020-12-24T00:03:29Z)
- Real-Time Video Inference on Edge Devices via Adaptive Model Streaming [9.101956442584251]
Real-time video inference on edge devices like mobile phones and drones is challenging due to the high cost of Deep Neural Networks.
We present Adaptive Model Streaming (AMS), a new approach to improving performance of efficient lightweight models for video inference on edge devices.
arXiv Detail & Related papers (2020-06-11T17:25:44Z)
- Deep Space-Time Video Upsampling Networks [47.62807427163614]
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems.
We propose an end-to-end framework for space-time video upsampling that efficiently merges VSR and FI into a joint model.
The proposed framework achieves better results both quantitatively and qualitatively, while reducing computation time (7x faster) and the number of parameters (30%) compared to baselines.
arXiv Detail & Related papers (2020-04-06T07:04:21Z)
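As a rough illustration of what merging VSR and FI into one model can look like, here is a generic joint space-time upsampling skeleton in which shared features feed both a temporal interpolation head and a spatial upsampling head. This is an assumed toy design for illustration only, not the authors' architecture.

```python
# Toy joint space-time upsampler: one shared encoder, two task heads.
import torch
import torch.nn as nn

class JointSpaceTimeUpsampler(nn.Module):
    def __init__(self, channels=32, scale=4):
        super().__init__()
        # Shared features computed once from the two input low-res frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Temporal head: synthesizes the intermediate (low-res) frame.
        self.temporal_head = nn.Conv2d(channels, 3, 3, padding=1)
        # Spatial head: upsamples features to high resolution via pixel shuffle.
        self.spatial_head = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frame_a, frame_b):
        feats = self.encoder(torch.cat([frame_a, frame_b], dim=1))
        mid_lr = self.temporal_head(feats)   # interpolated low-res frame
        hr = self.spatial_head(feats)        # super-resolved output
        return mid_lr, hr

model = JointSpaceTimeUpsampler()
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
mid, hr = model(a, b)
print(mid.shape, hr.shape)  # (1, 3, 64, 64) and (1, 3, 256, 256)
```

The appeal of such a joint design is that both tasks reuse one feature extractor instead of running two full networks back to back, which is the kind of sharing that can yield the reported speed and parameter savings.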
This list is automatically generated from the titles and abstracts of the papers on this site.