GameIR: A Large-Scale Synthesized Ground-Truth Dataset for Image Restoration over Gaming Content
- URL: http://arxiv.org/abs/2408.16866v1
- Date: Thu, 29 Aug 2024 19:11:46 GMT
- Title: GameIR: A Large-Scale Synthesized Ground-Truth Dataset for Image Restoration over Gaming Content
- Authors: Lebin Zhou, Kun Han, Nam Ling, Wei Wang, Wei Jiang
- Abstract summary: We develop GameIR, a large-scale computer-synthesized ground-truth dataset to fill this gap.
We provide 19,200 LR-HR paired ground-truth frames from 640 videos rendered at 720p and 1440p for this task.
The second is novel view synthesis (NVS), to support the multiview gaming solution of rendering and transferring part of the multiview frames and generating the remaining frames on the client side.
- Score: 16.07538127436932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image restoration methods like super-resolution and image synthesis have been successfully used in commercial cloud gaming products like NVIDIA's DLSS. However, restoration over gaming content is not well studied by the open research community. The discrepancy is mainly caused by the lack of ground-truth gaming training data that match the test cases. Due to the unique characteristics of gaming content, the common approach of generating pseudo training data by degrading the original HR images results in inferior restoration performance. In this work, we develop GameIR, a large-scale high-quality computer-synthesized ground-truth dataset to fill this gap, targeting two different applications. The first is super-resolution with deferred rendering, to support the gaming solution of rendering and transferring LR images only and restoring HR images on the client side. We provide 19,200 LR-HR paired ground-truth frames from 640 videos rendered at 720p and 1440p for this task. The second is novel view synthesis (NVS), to support the multiview gaming solution of rendering and transferring part of the multiview frames and generating the remaining frames on the client side. This task has 57,600 HR frames from 960 videos of 160 scenes with 6 camera views. In addition to the RGB frames, the GBuffers produced during the deferred rendering stage are also provided and can be used to aid restoration. Furthermore, we evaluate several SOTA super-resolution algorithms and NeRF-based NVS algorithms on our dataset, demonstrating the effectiveness of our ground-truth GameIR data in improving restoration performance for gaming content. We also test incorporating the GBuffers as additional input to help super-resolution and NVS. We release our dataset and models to the public to facilitate research on restoration methods for gaming content.
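The abstract mentions both releasing the deferred-rendering GBuffers and testing them as additional input for super-resolution. The sketch below shows one plausible way to do that, concatenating resampled GBuffer channels with the LR frame before a toy upscaling network; the buffer set (albedo, normal, depth), the resolutions, and the network are illustrative assumptions, not the released GameIR models.

```python
# Minimal sketch (not the authors' released code) of feeding deferred-rendering
# GBuffers to a super-resolution network as extra input channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GBufferSR(nn.Module):
    """Toy SR head: LR RGB plus GBuffer channels in, 2x-upscaled RGB out."""
    def __init__(self, gbuffer_channels: int = 7, feat: int = 64):
        super().__init__()
        in_ch = 3 + gbuffer_channels  # RGB + (3 albedo, 3 normal, 1 depth) assumed
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3 * 4, 3, padding=1),  # 4 = 2x2 pixel-shuffle factor
        )
        self.upscale = nn.PixelShuffle(2)  # 720p -> 1440p

    def forward(self, lr_rgb: torch.Tensor, gbuffers: torch.Tensor) -> torch.Tensor:
        # Assumption: the GBuffers come at HR, so resample them to the LR grid
        # before concatenating with the LR frame along the channel dimension.
        g_lr = F.interpolate(gbuffers, size=lr_rgb.shape[-2:], mode="bilinear",
                             align_corners=False)
        x = torch.cat([lr_rgb, g_lr], dim=1)
        return self.upscale(self.body(x))

# Dummy tensors standing in for one 720p LR frame and its GBuffers.
lr = torch.rand(1, 3, 720, 1280)
gb = torch.rand(1, 7, 1440, 2560)
print(GBufferSR()(lr, gb).shape)  # torch.Size([1, 3, 1440, 2560])
```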
Related papers
- Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming Content [39.55748992685093]
We propose a novel lightweight network called Coding Prior-Guided Super-Resolution (CPGSR) to address the SR challenges in compressed game video content.
Inspired by the quantization in video coding, we propose a partitioned focal frequency loss to effectively guide the model's focus on preserving high-frequency information.
arXiv Detail & Related papers (2025-01-03T12:01:36Z) - REDUCIO! Generating 1024$\times$1024 Video within 16 Seconds using Extremely Compressed Motion Latents [110.41795676048835]
One crucial obstacle for large-scale applications is the expensive training and inference cost.
In this paper, we argue that videos contain much more redundant information than images, thus can be encoded by very few motion latents.
We train Reducio-DiT in around 3.2K training hours in total and generate a 16-frame 1024×1024 video clip within 15.5 seconds on a single A100 GPU.
arXiv Detail & Related papers (2024-11-20T18:59:52Z) - DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models [9.145545884814327]
This paper introduces a method for zero-shot video restoration using pre-trained image restoration diffusion models.
We show that our method achieves top performance in zero-shot video restoration.
Our technique works with any 2D restoration diffusion model, offering a versatile and powerful tool for video enhancement tasks without extensive retraining.
arXiv Detail & Related papers (2024-07-01T17:59:12Z) - Real-Time 4K Super-Resolution of Compressed AVIF Images. AIS 2024 Challenge Survey [116.29700317843043]
This paper introduces a novel benchmark as part of the AIS 2024 Real-Time Image Super-Resolution Challenge.
It aims to upscale compressed images from 540p to 4K resolution in real-time on commercial GPUs.
We use a diverse test set containing a variety of 4K images ranging from digital art to gaming and photography.
arXiv Detail & Related papers (2024-04-25T10:12:42Z) - VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data [18.877077302923713]
We present a video compression-based degradation model to synthesize low-resolution image data for the blind SISR task.
Our proposed image synthesizing method is widely applicable to existing image datasets.
By introducing video coding artifacts into the SISR degradation model, neural networks can super-resolve images while also restoring video compression degradations (a minimal sketch of this degradation idea appears after this list).
arXiv Detail & Related papers (2023-11-02T05:24:19Z) - Enabling Real-time Neural Recovery for Cloud Gaming on Mobile Devices [11.530719133935847]
We propose a new method for recovering lost or corrupted video frames in cloud gaming.
Unlike traditional video frame recovery, our approach uses game states to significantly enhance recovery accuracy.
We develop a holistic system that consists of (i) efficiently extracting game states, (ii) modifying H.264 video decoder to generate a mask to indicate which portions of video frames need recovery, and (iii) designing a novel neural network to recover either complete or partial video frames.
arXiv Detail & Related papers (2023-07-15T16:45:01Z) - Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration [71.6879432974126]
In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution.
We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution.
Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR.
arXiv Detail & Related papers (2022-09-22T23:25:08Z) - On the Generalization of BasicVSR++ to Video Deblurring and Denoising [98.99165593274304]
We extend BasicVSR++ to a generic framework for video restoration tasks.
For tasks where inputs and outputs have the same spatial size, the input resolution is reduced by strided convolutions to maintain efficiency.
With only minimal changes from BasicVSR++, the proposed framework achieves compelling performance with great efficiency in various video restoration tasks.
arXiv Detail & Related papers (2022-04-11T17:59:56Z) - BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment [90.81396836308085]
We show that by empowering recurrent framework with enhanced propagation and alignment, one can exploit video information more effectively.
Our model BasicVSR++ surpasses BasicVSR by 0.82 dB in PSNR with a similar number of parameters.
BasicVSR++ generalizes well to other video restoration tasks such as compressed video enhancement.
arXiv Detail & Related papers (2021-04-27T17:58:31Z) - AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results [96.74919503142014]
This paper reviews the video extreme super-resolution challenge associated with the AIM 2020 workshop at ECCV 2020.
Track 1 is set up to gauge the state-of-the-art for such a demanding task, where fidelity to the ground truth is measured by PSNR and SSIM.
Track 2, in contrast, aims at generating visually pleasing results, which are ranked according to human perception via a user study.
arXiv Detail & Related papers (2020-09-14T09:36:25Z)
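The VCISR entry in the list above synthesizes blind-SISR training data by pushing images through a video codec so the learned degradations include video coding artifacts. Below is a minimal sketch of that idea, assuming ffmpeg with libx264 is available on PATH; the scale factor, CRF value, and single-frame encoding are illustrative choices, not the paper's actual degradation recipe.

```python
# Hedged sketch of a video-compression degradation pipeline (not the VCISR code).
# Requires ffmpeg with libx264 on PATH. Downscales an HR frame, encodes it as a
# one-frame H.264 stream, and decodes it back so the LR image carries coding artifacts.
# Assumes the downscaled dimensions stay even (true for 1440p inputs at scale 4).
import subprocess
import tempfile
from pathlib import Path

def compress_degrade(hr_png: str, lr_png: str, scale: int = 4, crf: int = 32) -> None:
    with tempfile.TemporaryDirectory() as tmp:
        bitstream = Path(tmp) / "frame.mp4"
        # 1) Downscale and encode with H.264; higher CRF -> stronger artifacts.
        subprocess.run(
            ["ffmpeg", "-y", "-i", hr_png,
             "-vf", f"scale=iw/{scale}:ih/{scale}",
             "-c:v", "libx264", "-crf", str(crf), "-pix_fmt", "yuv420p",
             str(bitstream)],
            check=True, capture_output=True)
        # 2) Decode the compressed frame back to PNG; this is the degraded LR input.
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(bitstream), "-frames:v", "1", lr_png],
            check=True, capture_output=True)

# Example: build one LR-HR training pair from an HR frame on disk.
# compress_degrade("hr_frame_0001.png", "lr_frame_0001.png")
```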
This list is automatically generated from the titles and abstracts of the papers on this site.