Invertible Image Rescaling
- URL: http://arxiv.org/abs/2005.05650v1
- Date: Tue, 12 May 2020 09:55:53 GMT
- Title: Invertible Image Rescaling
- Authors: Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin
Ke, Jiang Bian, Zhouchen Lin, and Tie-Yan Liu
- Abstract summary: We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
- Score: 118.2653765756915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-resolution digital images are usually downscaled to fit various display
screens or save the cost of storage and bandwidth, meanwhile the post-upscaling
is adpoted to recover the original resolutions or the details in the zoom-in
images. However, typical image downscaling is a non-injective mapping due to
the loss of high-frequency information, which leads to the ill-posed problem of
the inverse upscaling procedure and poses great challenges for recovering
details from the downscaled low-resolution images. Simply upscaling with image
super-resolution methods results in unsatisfactory recovering performance. In
this work, we propose to solve this problem by modeling the downscaling and
upscaling processes from a new perspective, i.e. an invertible bijective
transformation, which can largely mitigate the ill-posed nature of image
upscaling. We develop an Invertible Rescaling Net (IRN) with a deliberately
designed framework and objectives to produce visually-pleasing low-resolution
images while capturing the distribution of the lost information using a
latent variable that follows a specified distribution in the downscaling process.
In this way, upscaling is made tractable by inversely passing a randomly-drawn
latent variable with the low-resolution image through the network. Experimental
results demonstrate the significant improvement of our model over existing
methods in terms of both quantitative and qualitative evaluations of image
upscaling reconstruction from downscaled images.
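The core mechanism described above, an invertible forward pass that outputs a low-resolution image together with a latent variable, and an inverse pass that re-attaches a freshly sampled latent to upscale, can be illustrated with a small flow-style model. The sketch below combines a Haar-style invertible downsampling with a single affine coupling block; the channel widths, module names, and the omitted training objective are illustrative assumptions rather than the authors' released IRN implementation.

```python
# Minimal sketch of an invertible rescaling flow (illustrative, not the IRN code).
import torch
import torch.nn as nn


class HaarDownsampling(nn.Module):
    """Invertible 2x downscaling: splits a 3-channel image into a low-frequency
    band (3 channels, roughly the LR image) and high-frequency bands (9 channels)."""

    def forward(self, x):
        a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
        c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
        ll, hl = (a + b + c + d) / 2, (a - b + c - d) / 2
        lh, hh = (a + b - c - d) / 2, (a - b - c + d) / 2
        return torch.cat([ll, hl, lh, hh], dim=1)

    def inverse(self, y):
        ll, hl, lh, hh = torch.chunk(y, 4, dim=1)
        a, b = (ll + hl + lh + hh) / 2, (ll - hl + lh - hh) / 2
        c, d = (ll + hl - lh - hh) / 2, (ll - hl - lh + hh) / 2
        x = torch.empty(a.shape[0], a.shape[1], 2 * a.shape[2], 2 * a.shape[3],
                        dtype=y.dtype, device=y.device)
        x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2] = a, b
        x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2] = c, d
        return x


class AffineCoupling(nn.Module):
    """Transforms the high-frequency channels conditioned on the low-frequency
    channels; invertible by construction, so no information is discarded."""

    def __init__(self, lr_ch=3, hf_ch=9, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(lr_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * hf_ch, 3, padding=1),
        )

    def forward(self, lr, hf):
        log_s, t = torch.chunk(self.net(lr), 2, dim=1)
        return lr, hf * torch.exp(torch.tanh(log_s)) + t

    def inverse(self, lr, z):
        log_s, t = torch.chunk(self.net(lr), 2, dim=1)
        return lr, (z - t) * torch.exp(-torch.tanh(log_s))


haar, coupling = HaarDownsampling(), AffineCoupling()
hr = torch.rand(1, 3, 64, 64)                      # high-resolution input
bands = haar(hr)
lr, z = coupling(bands[:, :3], bands[:, 3:])       # LR image + latent (trained toward N(0, I))
z_new = torch.randn_like(z)                        # upscaling: draw a fresh latent sample
_, hf = coupling.inverse(lr, z_new)
hr_rec = haar.inverse(torch.cat([lr, hf], dim=1))  # reconstructed high-resolution image
```

In an IRN-style training setup, additional objectives would push the low-frequency output toward a visually-pleasing LR image and the latent toward the specified distribution (e.g. a standard Gaussian), so that a random draw at test time is sufficient; those losses are omitted from this sketch.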
Related papers
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize a pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- Suppressing Uncertainties in Degradation Estimation for Blind Super-Resolution [31.89605287039615]
The problem of blind image super-resolution aims to recover high-resolution (HR) images from low-resolution (LR) images with unknown degradation modes.
Most existing methods model the image degradation process using blur kernels.
We propose an Uncertainty-based degradation representation for blind Super-Resolution framework.
arXiv Detail & Related papers (2024-06-24T08:58:43Z)
- CasSR: Activating Image Power for Real-World Image Super-Resolution [24.152495730507823]
Cascaded diffusion for Super-Resolution, CasSR, is a novel method designed to produce highly detailed and realistic images.
We develop a cascaded controllable diffusion model that aims to optimize the extraction of information from low-resolution images.
arXiv Detail & Related papers (2024-03-18T03:59:43Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization [67.99082021804145]
We propose an explicit solution to the COO problem, called Detail Enhanced Contrastive Loss (DECLoss).
DECLoss utilizes the clustering property of contrastive learning to directly reduce the variance of the potential high-resolution distribution.
We evaluate DECLoss on multiple super-resolution benchmarks and demonstrate that it improves the perceptual quality of PSNR-oriented models.
arXiv Detail & Related papers (2022-01-04T08:30:09Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process (a minimal sketch of this dual-branch design appears after this list).
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
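The Gated Fusion Network entry above describes a dual-branch feature extractor whose two task-independent streams are later combined. Below is a minimal sketch of one way such a design can be wired with a gated fusion step; the channel widths, the sigmoid gate, and the pixel-shuffle upsampling head are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical dual-branch super-resolution model with gated feature fusion.
import torch
import torch.nn as nn


class DualBranchSR(nn.Module):
    def __init__(self, ch=32, scale=4):
        super().__init__()
        # Branch 1: base features extracted directly from the degraded LR input.
        self.base = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # Branch 2: "recovered" features from a second, task-independent stream.
        self.recover = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # Gate predicts per-pixel weights used to blend the two feature streams.
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.Sigmoid())
        # Simple pixel-shuffle head producing the high-resolution output.
        self.head = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        fb, fr = self.base(lr), self.recover(lr)
        g = self.gate(torch.cat([fb, fr], dim=1))
        return self.head(g * fb + (1 - g) * fr)


sr = DualBranchSR()
out = sr(torch.rand(1, 3, 32, 32))  # -> (1, 3, 128, 128)
```

The learned gate weights the base and recovered features per pixel before upsampling, which is one plausible reading of the two-stream summary above.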