Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming Content
- URL: http://arxiv.org/abs/2501.01773v1
- Date: Fri, 03 Jan 2025 12:01:36 GMT
- Title: Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming Content
- Authors: Qizhe Wang, Qian Yin, Zhimeng Huang, Weijia Jiang, Yi Su, Siwei Ma, Jiaqi Zhang
- Abstract summary: We propose a novel lightweight network called Coding Prior-Guided Super-Resolution (CPGSR) to address the SR challenges in compressed game video content.
Inspired by the quantization in video coding, we propose a partitioned focal frequency loss to effectively guide the model's focus on preserving high-frequency information.
- Score: 39.55748992685093
- License:
- Abstract: Cloud gaming is an advanced form of Internet service that necessitates local terminals to decode within limited resources and time latency. Super-Resolution (SR) techniques are often employed on these terminals as an efficient way to reduce the required bit-rate bandwidth for cloud gaming. However, insufficient attention has been paid to SR of compressed game video content. Most SR networks amplify block artifacts and ringing effects in decoded frames while ignoring edge details of game content, leading to unsatisfactory reconstruction results. In this paper, we propose a novel lightweight network called Coding Prior-Guided Super-Resolution (CPGSR) to address the SR challenges in compressed game video content. First, we design a Compressed Domain Guided Block (CDGB) to extract features of different depths from coding priors, which are subsequently integrated with features from the U-net backbone. Then, a series of re-parameterization blocks are utilized for reconstruction. Ultimately, inspired by the quantization in video coding, we propose a partitioned focal frequency loss to effectively guide the model's focus on preserving high-frequency information. Extensive experiments demonstrate the effectiveness of our approach.
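The partitioned focal frequency loss is described only at a high level in the abstract. A minimal sketch of one plausible reading, assuming a spatial tiling into equal partitions and the error-proportional focal weighting of the standard focal frequency loss (the `parts` and `alpha` parameters are assumptions, not values from the paper), might look like:

```python
import numpy as np

def focal_frequency_loss(pred, target, alpha=1.0):
    """Focal frequency loss on a single-channel H x W image pair:
    frequencies with larger reconstruction error receive larger weight."""
    diff = np.fft.fft2(pred) - np.fft.fft2(target)
    w = np.abs(diff) ** alpha
    w = w / (w.max() + 1e-12)  # normalise weights to [0, 1]
    return float(np.mean(w * np.abs(diff) ** 2))

def partitioned_focal_frequency_loss(pred, target, parts=4, alpha=1.0):
    """Hypothetical partitioned variant: average the focal frequency loss
    over a parts x parts grid of spatial tiles, so each local block's
    high-frequency content is preserved independently."""
    H, W = pred.shape
    hs, ws = H // parts, W // parts
    losses = []
    for i in range(parts):
        for j in range(parts):
            p = pred[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            t = target[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            losses.append(focal_frequency_loss(p, t, alpha))
    return float(np.mean(losses))
```

The tiling loosely mirrors the block-based quantization in video codecs that the authors cite as inspiration; the actual partitioning scheme in CPGSR may differ.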
Related papers
- Large Motion Video Autoencoding with Cross-modal Video VAE [52.13379965800485]
Video Variational Autoencoders (VAEs) are essential for reducing video redundancy and facilitating efficient video generation.
Existing Video VAEs have begun to address temporal compression; however, they often suffer from inadequate reconstruction performance.
We present a novel and powerful video autoencoder capable of high-fidelity video encoding.
arXiv Detail & Related papers (2024-12-23T18:58:24Z)
- FrameCorr: Adaptive, Autoencoder-based Neural Compression for Video Reconstruction in Resource and Timing Constrained Network Settings [0.18906710320196732]
Existing video compression methods face difficulties in recovering compressed data when incomplete data is provided.
We introduce FrameCorr, a deep-learning based solution that utilizes previously received data to predict the missing segments of a frame.
arXiv Detail & Related papers (2024-09-04T05:19:57Z)
- NU-Class Net: A Novel Approach for Video Quality Enhancement [1.7763979745248648]
This paper introduces NU-Class Net, an innovative deep-learning model designed to mitigate compression artifacts stemming from lossy compression codecs.
By employing the NU-Class Net, the video encoder within the video-capturing node can reduce output quality, thereby generating low-bit-rate videos.
Experimental results affirm the efficacy of the proposed model in enhancing the perceptible quality of videos, especially those streamed at low bit rates.
arXiv Detail & Related papers (2024-01-02T11:46:42Z)
- Enabling Real-time Neural Recovery for Cloud Gaming on Mobile Devices [11.530719133935847]
We propose a new method for recovering lost or corrupted video frames in cloud gaming.
Unlike traditional video frame recovery, our approach uses game states to significantly enhance recovery accuracy.
We develop a holistic system that consists of (i) efficiently extracting game states, (ii) modifying H.264 video decoder to generate a mask to indicate which portions of video frames need recovery, and (iii) designing a novel neural network to recover either complete or partial video frames.
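The final compositing step implied by stage (ii) and (iii) above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the recovery network produces a full candidate frame and that the decoder's mask marks lost regions with 1.

```python
import numpy as np

def composite_recovered_frame(decoded, recovered, mask):
    """Keep correctly decoded pixels and substitute the recovery network's
    output only where the decoder-generated mask flags lost or corrupted
    regions (mask == 1). `recovered` as a full predicted frame is an
    assumption; the paper's network may recover complete or partial frames."""
    return np.where(mask.astype(bool), recovered, decoded)
```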
arXiv Detail & Related papers (2023-07-15T16:45:01Z)
- VNVC: A Versatile Neural Video Coding Framework for Efficient Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
arXiv Detail & Related papers (2023-06-19T03:04:57Z)
- GRACE: Loss-Resilient Real-Time Video through Neural Codecs [31.006987868475683]
In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements.
We present a loss-resilient real-time video system called GRACE, which preserves the user's quality of experience (QoE) across a wide range of packet losses.
arXiv Detail & Related papers (2023-05-21T03:50:44Z)
- Exploring Long- and Short-Range Temporal Information for Learned Video Compression [54.91301930491466]
We focus on exploiting the unique characteristics of video content and exploring temporal information to enhance compression performance.
For long-range temporal information exploitation, we propose temporal prior that can update continuously within the group of pictures (GOP) during inference.
In this way, the temporal prior contains valuable temporal information from all decoded frames within the current GOP.
In detail, we design a hierarchical structure to achieve multi-scale compensation.
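A continuously updated temporal prior of the kind described above can be sketched as a simple recurrent accumulation over the decoded frames of a GOP. The convex-combination update rule and the `beta` value are assumptions for illustration; the paper's actual update is learned:

```python
import numpy as np

def update_temporal_prior(prior, decoded_feat, beta=0.8):
    """Hypothetical recurrent update: blend the running prior with the
    feature of the most recently decoded frame."""
    return beta * prior + (1.0 - beta) * decoded_feat

def accumulate_gop_prior(features):
    """Update the temporal prior after each decoded frame so that, by the
    end, it reflects every decoded frame in the current GOP."""
    prior = features[0]
    for feat in features[1:]:
        prior = update_temporal_prior(prior, feat)
    return prior
```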
arXiv Detail & Related papers (2022-08-07T15:57:18Z)
- Microdosing: Knowledge Distillation for GAN based Compression [18.140328230701233]
We show how to leverage knowledge distillation to obtain equally capable image decoders at a fraction of the original number of parameters.
This allows us to reduce the model size by a factor of 20 and to achieve 50% reduction in decoding time.
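The distillation objective behind such a student decoder typically combines a ground-truth reconstruction term with a term pulling the student toward the teacher's output. A minimal sketch, assuming simple L2 terms and a hypothetical weighting `lam` (the paper's actual loss for GAN-based compression likely includes adversarial and perceptual terms):

```python
import numpy as np

def decoder_distillation_loss(student_out, teacher_out, target, lam=0.5):
    """Train a small student decoder to match both the ground-truth image
    and the large teacher decoder's reconstruction."""
    recon = float(np.mean((student_out - target) ** 2))
    distill = float(np.mean((student_out - teacher_out) ** 2))
    return recon + lam * distill
```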
arXiv Detail & Related papers (2022-01-07T14:27:16Z)
- Transcoded Video Restoration by Temporal Spatial Auxiliary Network [64.63157339057912]
We propose a new method, temporal spatial auxiliary network (TSAN), for transcoded video restoration.
The experimental results demonstrate that the performance of the proposed method is superior to that of the previous techniques.
arXiv Detail & Related papers (2021-12-15T08:10:23Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
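The multi-frame joint training idea above can be sketched as a rate-distortion loss accumulated over consecutive frames, where each frame is coded against the previous *reconstruction* so that propagated error is penalised during training. The `codec` callable and the `lam` weight are assumptions for illustration:

```python
import numpy as np

def joint_multiframe_loss(frames, codec, lam=0.01):
    """Sum rate-distortion loss over consecutive frames. `codec` is a
    hypothetical callable (frame, reference) -> (reconstruction, bits);
    feeding each reconstruction forward as the next reference exposes
    error propagation to the training objective."""
    ref = frames[0]
    total = 0.0
    for frame in frames[1:]:
        recon, bits = codec(frame, ref)
        total += float(np.mean((recon - frame) ** 2)) + lam * bits
        ref = recon  # error propagation path
    return total
```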
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.