Invertible Image Rescaling
- URL: http://arxiv.org/abs/2005.05650v1
- Date: Tue, 12 May 2020 09:55:53 GMT
- Title: Invertible Image Rescaling
- Authors: Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin
Ke, Jiang Bian, Zhouchen Lin, and Tie-Yan Liu
- Abstract summary: We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
- Score: 118.2653765756915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-resolution digital images are usually downscaled to fit various display
screens or to save storage and bandwidth costs, while post-upscaling is adopted to
recover the original resolution or the details in the zoom-in
images. However, typical image downscaling is a non-injective mapping due to
the loss of high-frequency information, which leads to the ill-posed problem of
the inverse upscaling procedure and poses great challenges for recovering
details from the downscaled low-resolution images. Simply upscaling with image
super-resolution methods results in unsatisfactory recovering performance. In
this work, we propose to solve this problem by modeling the downscaling and
upscaling processes from a new perspective, i.e. an invertible bijective
transformation, which can largely mitigate the ill-posed nature of image
upscaling. We develop an Invertible Rescaling Net (IRN) with deliberately
designed framework and objectives to produce visually-pleasing low-resolution
images and meanwhile capture the distribution of the lost information using a
latent variable following a specified distribution in the downscaling process.
In this way, upscaling is made tractable by inversely passing a randomly-drawn
latent variable with the low-resolution image through the network. Experimental
results demonstrate the significant improvement of our model over existing
methods in terms of both quantitative and qualitative evaluations of image
upscaling reconstruction from downscaled images.
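The invertible-bijection idea in the abstract can be illustrated with a Haar wavelet transform, which is exactly invertible: the forward pass splits an image into a low-resolution band plus detail coefficients (the "lost information" that IRN models with a latent variable), and upscaling applies the inverse transform. This is a minimal sketch of the mechanism only, not the authors' network; the function names `haar_downscale` and `haar_upscale` are illustrative, and the final step of drawing random details stands in for IRN's learned latent distribution.

```python
import numpy as np

def haar_downscale(x):
    # Forward pass: map each 2x2 block to one (scaled) low-frequency
    # coefficient and three high-frequency detail coefficients.
    # The map is bijective, so no information is lost.
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    lr   = (a + b + c + d) / 2.0   # low-frequency band (the "LR image")
    h    = (a - b + c - d) / 2.0   # horizontal detail
    v    = (a + b - c - d) / 2.0   # vertical detail
    diag = (a - b - c + d) / 2.0   # diagonal detail
    return lr, (h, v, diag)

def haar_upscale(lr, details):
    # Inverse pass: exact reconstruction from the LR band and details.
    h, v, diag = details
    a = (lr + h + v + diag) / 2.0
    b = (lr - h + v - diag) / 2.0
    c = (lr + h - v - diag) / 2.0
    d = (lr - h - v + diag) / 2.0
    out = np.empty((lr.shape[0] * 2, lr.shape[1] * 2))
    out[0::2, 0::2] = a
    out[0::2, 1::2] = b
    out[1::2, 0::2] = c
    out[1::2, 1::2] = d
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8))
    lr, det = haar_downscale(x)

    # With the true details, reconstruction is exact (bijectivity).
    assert np.allclose(haar_upscale(lr, det), x)

    # IRN's upscaling idea: discard the details and instead draw them
    # from a specified distribution (here plain Gaussians as a stand-in
    # for the learned latent), then invert. The result has the right
    # shape and plausible high-frequency statistics, but is not exact.
    z = tuple(rng.standard_normal(lr.shape) for _ in range(3))
    hr_sample = haar_upscale(lr, z)
    assert hr_sample.shape == x.shape
```

In IRN itself, the detail bands produced by such a wavelet split are further transformed by learned invertible coupling layers so that the residual latent follows an isotropic Gaussian; the sketch above only shows why an invertible parameterization makes upscaling tractable.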
Related papers
- One-step Generative Diffusion for Realistic Extreme Image Rescaling [47.89362819768323]
We propose a novel framework called One-Step Image Rescaling Diffusion (OSIRDiff) for extreme image rescaling.
OSIRDiff performs rescaling operations in the latent space of a pre-trained autoencoder.
It effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- CasSR: Activating Image Power for Real-World Image Super-Resolution [24.152495730507823]
CasSR (Cascaded diffusion for Super-Resolution) is a novel method designed to produce highly detailed and realistic images.
We develop a cascaded controllable diffusion model that aims to optimize the extraction of information from low-resolution images.
arXiv Detail & Related papers (2024-03-18T03:59:43Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization [67.99082021804145]
We propose an explicit solution to the COO problem, called Detail Enhanced Contrastive Loss (DECLoss).
DECLoss utilizes the clustering property of contrastive learning to directly reduce the variance of the potential high-resolution distribution.
We evaluate DECLoss on multiple super-resolution benchmarks and demonstrate that it improves the perceptual quality of PSNR-oriented models.
arXiv Detail & Related papers (2022-01-04T08:30:09Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.