Super-Resolution by Predicting Offsets: An Ultra-Efficient
Super-Resolution Network for Rasterized Images
- URL: http://arxiv.org/abs/2210.04198v1
- Date: Sun, 9 Oct 2022 08:16:36 GMT
- Authors: Jinjin Gu, Haoming Cai, Chenyu Dong, Ruofan Zhang, Yulun Zhang,
Wenming Yang, Chun Yuan
- Abstract summary: We present a new method for real-time SR for computer graphics, namely Super-Resolution by Predicting Offsets (SRPO).
Our algorithm divides the image into two parts for processing, i.e., sharp edges and flatter areas.
Experiments show that the proposed SRPO can achieve superior visual effects at a smaller computational cost than the existing state-of-the-art methods.
- Score: 47.684307267915024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rendering high-resolution (HR) graphics brings substantial computational
costs. Efficient graphics super-resolution (SR) methods may achieve HR
rendering with small computing resources and have attracted extensive
interest in both industry and the research community. We present a new method for
real-time SR for computer graphics, namely Super-Resolution by Predicting
Offsets (SRPO). Our algorithm divides the image into two parts for processing,
i.e., sharp edges and flatter areas. For edges, different from the previous SR
methods that take the anti-aliased images as inputs, our proposed SRPO takes
advantage of the characteristics of rasterized images to conduct SR on the
rasterized images. To compensate for the residual between HR and low-resolution
(LR) rasterized images, we train an ultra-efficient network to predict the
offset maps to move the appropriate surrounding pixels to the new positions.
For flat areas, we found that simple interpolation methods can already generate
reasonable output. Finally, we use a guided fusion operation to integrate the
sharp edges generated by the network with the flat areas produced by interpolation
to obtain the final SR image. The proposed network contains only 8,434 parameters
and can be accelerated by network quantization. Extensive experiments show that
the proposed SRPO can achieve superior visual effects at a smaller
computational cost than the existing state-of-the-art methods.
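To make the pipeline concrete, below is a minimal PyTorch sketch of the general idea. All names here (OffsetNet, warp_by_offsets, srpo_like_sr, the edge mask) are illustrative assumptions, not the authors' implementation: a tiny network predicts per-pixel offsets, grid sampling moves surrounding LR pixels to new HR positions, and a mask-guided blend fuses the warped edges with a bicubic upsample for the flat areas.

```python
# Minimal PyTorch sketch of the SRPO idea (not the authors' code):
# a tiny CNN predicts per-pixel offsets that relocate nearby rasterized
# LR pixels to form sharp HR edges, while flat areas come from plain
# interpolation; a guidance mask fuses the two branches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetNet(nn.Module):
    """Hypothetical tiny offset predictor; the paper's network has 8,434
    parameters, and this stand-in is merely of a similar scale."""
    def __init__(self, scale=2, ch=16):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            # two offset channels (dx, dy) for each HR sub-pixel position
            nn.Conv2d(ch, 2 * scale * scale, 3, padding=1),
        )

    def forward(self, lr):
        offsets = self.body(lr)                      # (B, 2*s*s, h, w)
        return F.pixel_shuffle(offsets, self.scale)  # (B, 2, H, W)

def warp_by_offsets(lr, offsets):
    """Move appropriate surrounding LR pixels to offset positions on the HR grid."""
    b, _, H, W = offsets.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(b, H, W, 2).to(lr)
    # Offsets are predicted in LR pixel units; convert to [-1, 1] coords.
    h, w = lr.shape[-2:]
    norm = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)]).to(lr)
    grid = grid + offsets.permute(0, 2, 3, 1) * norm
    # Nearest sampling keeps rasterized edges hard instead of blurring them.
    return F.grid_sample(lr, grid, mode="nearest", align_corners=True)

def srpo_like_sr(lr, net, edge_mask):
    """Guided fusion of the warped-edge branch and the interpolated branch."""
    flat = F.interpolate(lr, scale_factor=net.scale, mode="bicubic",
                         align_corners=False)
    edges = warp_by_offsets(lr, net(lr))
    return edge_mask * edges + (1.0 - edge_mask) * flat

# Usage with random stand-ins for a rasterized LR frame and an edge mask:
net = OffsetNet(scale=2)
lr_img = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 128, 128) > 0.9).float()  # stand-in edge mask
print(srpo_like_sr(lr_img, net, mask).shape)       # torch.Size([1, 3, 128, 128])
```

Nearest-neighbour sampling is used in the warp so that hard rasterized edges are moved rather than blurred; the actual SRPO architecture, training scheme, and fusion weights differ in detail.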
Related papers
- Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Deep Posterior Distribution-based Embedding for Hyperspectral Image
Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z) - Deep Cyclic Generative Adversarial Residual Convolutional Networks for
Real Image Super-Resolution [20.537597542144916]
We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions.
We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN) by training with a generative adversarial network (GAN) framework for the LR to HR domain translation.
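As a refresher on the cyclic part, the round-trip consistency term of a generic cycle GAN can be sketched as follows; this is the standard technique, not SRResCycGAN's exact losses, and the stand-in generators here are plain resamplers rather than deep networks.

```python
# Generic cycle-consistency sketch (illustrative; not SRResCycGAN's code):
# two generators map LR->HR and HR->LR, and round-trip L1 terms keep the
# two domain distributions consistent.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cycle_consistency_loss(g_lr2hr, g_hr2lr, lr, hr):
    lr_cycle = g_hr2lr(g_lr2hr(lr))  # LR -> HR -> LR round trip
    hr_cycle = g_lr2hr(g_hr2lr(hr))  # HR -> LR -> HR round trip
    return F.l1_loss(lr_cycle, lr) + F.l1_loss(hr_cycle, hr)

# Stand-in generators (real models would be deep CNNs):
g_up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
g_down = nn.Upsample(scale_factor=0.5, mode="bilinear", align_corners=False)
loss = cycle_consistency_loss(g_up, g_down, torch.rand(1, 3, 32, 32),
                              torch.rand(1, 3, 64, 64))
```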
arXiv Detail & Related papers (2020-09-07T11:11:18Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
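A minimal sketch of the zero-centric residual idea, with a hypothetical residual_net standing in for PZRes-Net: the network predicts only a zero-mean residual on top of an upsampled input, so the residual carries high-frequency detail rather than global brightness.

```python
# Sketch of the zero-centric residual idea (illustrative; not PZRes-Net).
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_centric_sr(lr, residual_net, scale=4):
    base = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                         align_corners=False)
    res = residual_net(base)                        # hypothetical predictor
    res = res - res.mean(dim=(2, 3), keepdim=True)  # enforce zero mean
    return base + res

# Usage with a one-layer stand-in for the residual predictor:
net = nn.Conv2d(3, 3, 3, padding=1)
print(zero_centric_sr(torch.rand(1, 3, 16, 16), net).shape)  # (1, 3, 64, 64)
```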
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral
Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z) - Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows the real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
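A hedged sketch of such a generator objective, using total variation as a stand-in for the paper's image regularization (the actual SRResCGAN energy function and weights are not reproduced here):

```python
# Generic generator objective in the spirit of the description above
# (illustrative; not the SRResCGAN formulation): pixel-wise supervision
# plus an adversarial term and a total-variation image regularizer.
import torch
import torch.nn.functional as F

def total_variation(x):
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def generator_loss(sr, hr, disc_logits, w_adv=5e-3, w_reg=1e-4):
    pix = F.l1_loss(sr, hr)                    # pixel-wise supervision
    adv = F.binary_cross_entropy_with_logits(  # try to fool the critic
        disc_logits, torch.ones_like(disc_logits))
    return pix + w_adv * adv + w_reg * total_variation(sr)
```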
arXiv Detail & Related papers (2020-05-03T00:12:38Z) - PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of
Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)