FuseSR: Super Resolution for Real-time Rendering through Efficient
Multi-resolution Fusion
- URL: http://arxiv.org/abs/2310.09726v1
- Date: Sun, 15 Oct 2023 04:01:05 GMT
- Title: FuseSR: Super Resolution for Real-time Rendering through Efficient
Multi-resolution Fusion
- Authors: Zhihua Zhong, Jingsen Zhu, Yuxin Dai, Chuankun Zheng, Yuchi Huo,
Guanlin Chen, Hujun Bao, Rui Wang
- Abstract summary: One of the most popular solutions is to render images at a low resolution to reduce rendering overhead.
In this paper, we propose an efficient and effective super-resolution method that predicts high-quality upsampled reconstructions.
Experiments show that our method is able to produce temporally consistent reconstructions in $4 \times 4$ and even challenging $8 \times 8$ upsampling cases at 4K resolution with real-time performance.
- Score: 38.67110413800048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The workload of real-time rendering is steeply increasing as the demand for
high resolution, high refresh rates, and high realism rises, overwhelming most
graphics cards. To mitigate this problem, one of the most popular solutions is
to render images at a low resolution to reduce rendering overhead, and then
manage to accurately upsample the low-resolution rendered image to the target
resolution, a.k.a. super-resolution techniques. Most existing methods focus on
exploiting information from low-resolution inputs, such as historical frames.
The absence of high-frequency details in those LR inputs makes it hard to
recover fine details in the high-resolution predictions. In this paper, we
propose an efficient and effective super-resolution method that predicts
high-quality upsampled reconstructions utilizing low-cost high-resolution
auxiliary G-buffers as additional input. With LR images and HR G-buffers as
input, the network must align and fuse features at multiple resolution
levels. We introduce an efficient and effective H-Net architecture to solve
this problem and significantly reduce rendering overhead without noticeable
quality deterioration. Experiments show that our method is able to produce
temporally consistent reconstructions in $4 \times 4$ and even challenging $8
\times 8$ upsampling cases at 4K resolution with real-time performance, with
substantially improved quality and significant performance boost compared to
existing works.
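The abstract's core idea can be illustrated with a toy sketch. This is not the paper's H-Net, just a minimal NumPy illustration, under assumed shapes and names, of fusing low-resolution shading features with high-resolution auxiliary G-buffers: upsample the LR features to the target resolution, concatenate them with the HR G-buffer channels, and mix with a per-pixel linear layer (a 1x1 convolution).

```python
import numpy as np

def upsample_nearest(feat, scale):
    """Nearest-neighbor upsampling of an (H, W, C) feature map."""
    return feat.repeat(scale, axis=0).repeat(scale, axis=1)

def fuse_multiresolution(lr_feat, hr_gbuffer, weights, scale=4):
    """Toy multi-resolution fusion (illustrative, not the paper's H-Net):
    1. Upsample LR features to the target resolution.
    2. Concatenate with HR G-buffer channels (e.g. albedo, normals, depth).
    3. Mix channels with a per-pixel linear map, i.e. a 1x1 convolution.
    """
    up = upsample_nearest(lr_feat, scale)                  # (H, W, C_lr)
    fused_in = np.concatenate([up, hr_gbuffer], axis=-1)   # (H, W, C_lr + C_g)
    return fused_in @ weights                              # (H, W, C_out)

# Tiny example: 2x2 LR features, 8x8 G-buffer, 4x4 upsampling.
rng = np.random.default_rng(0)
lr = rng.standard_normal((2, 2, 3))    # LR shading features (channels assumed)
g  = rng.standard_normal((8, 8, 7))    # HR G-buffer: e.g. albedo + normal + depth
w  = rng.standard_normal((3 + 7, 3))   # 1x1 conv weights, fused channels -> RGB
out = fuse_multiresolution(lr, g, w, scale=4)
print(out.shape)  # (8, 8, 3)
```

The point of the sketch is the data flow: the HR G-buffer contributes the high-frequency geometric detail that the upsampled LR features lack, and the per-pixel fusion sees both at the same resolution. The actual method learns this fusion with a multi-level network rather than a single linear layer.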
Related papers
- UltraPixel: Advancing Ultra-High-Resolution Image Synthesis to New Peaks [36.61645124563195]
We present UltraPixel, a novel architecture utilizing cascade diffusion models to generate high-quality images at multiple resolutions.
We use semantics-rich representations of lower-resolution images in the later denoising stage to guide the whole generation of highly detailed high-resolution images.
Our model achieves fast training with reduced data requirements, producing photo-realistic high-resolution images.
arXiv Detail & Related papers (2024-07-02T11:02:19Z)
- Auxiliary Features-Guided Super Resolution for Monte Carlo Rendering [8.54858933529271]
Super resolution can reduce the number of pixels to render and thus speed up Monte Carlo rendering algorithms.
We exploit high-resolution auxiliary features to guide super resolution of low-resolution renderings.
Our experiments show that our auxiliary features-guided super-resolution method outperforms both super-resolution methods and Monte Carlo denoising methods in producing high-quality renderings.
arXiv Detail & Related papers (2023-10-20T02:45:13Z)
- Rethinking Resolution in the Context of Efficient Video Recognition [49.957690643214576]
Cross-resolution KD (ResKD) is a simple but effective method to boost recognition accuracy on low-resolution frames.
We extensively demonstrate its effectiveness over state-of-the-art architectures, i.e., 3D-CNNs and Video Transformers.
arXiv Detail & Related papers (2022-09-26T15:50:44Z)
- Efficient High-Resolution Deep Learning: A Survey [90.76576712433595]
Cameras in modern devices such as smartphones, satellites and medical equipment are capable of capturing very high resolution images and videos.
Such high-resolution data often need to be processed by deep learning models for cancer detection, automated road navigation, weather prediction, surveillance, optimizing agricultural processes and many other applications.
Using high-resolution images and videos as direct inputs for deep learning models creates many challenges due to their high number of parameters, computation cost, inference latency and GPU memory consumption.
Several works in the literature propose better alternatives in order to deal with the challenges of high-resolution data and improve accuracy and speed while complying with hardware limitations.
arXiv Detail & Related papers (2022-07-26T17:13:53Z)
- SwiftSRGAN -- Rethinking Super-Resolution for Efficient and Real-time Inference [0.0]
We present an architecture that is faster and smaller in terms of its memory footprint.
Real-time super-resolution enables streaming high-resolution media content even under poor bandwidth conditions.
arXiv Detail & Related papers (2021-11-29T04:20:15Z)
- Projected GANs Converge Faster [50.23237734403834]
Generative Adversarial Networks (GANs) produce high-quality images but are challenging to train.
We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space.
Our Projected GAN improves image quality, sample efficiency, and convergence speed.
arXiv Detail & Related papers (2021-11-01T15:11:01Z)
- Generating Superpixels for High-resolution Images with Decoupled Patch Calibration [82.21559299694555]
Patch Networks (PCNet) is designed to efficiently and accurately implement high-resolution superpixel segmentation.
DPC takes a local patch from the high-resolution images and dynamically generates a binary mask to force the network to focus on region boundaries.
arXiv Detail & Related papers (2021-08-19T10:33:05Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- Blind Image Super-Resolution with Spatial Context Hallucination [5.849485167287474]
We propose a novel Spatial Context Hallucination Network (SCHN) for blind super-resolution without knowing the degradation kernel.
We train our model on two high quality datasets, DIV2K and Flickr2K.
Our method performs better than state-of-the-art methods when input images are corrupted with random blur and noise.
arXiv Detail & Related papers (2020-09-25T22:36:07Z)
- ImagePairs: Realistic Super Resolution Dataset via Beam Splitter Camera Rig [13.925480922578869]
We propose a new data acquisition technique for gathering real image data set.
We use a beam-splitter to capture the same scene by a low resolution camera and a high resolution camera.
Unlike current small-scale dataset used for these tasks, our proposed dataset includes 11,421 pairs of low-resolution high-resolution images.
arXiv Detail & Related papers (2020-04-18T03:06:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.