Scale-arbitrary Invertible Image Downscaling
- URL: http://arxiv.org/abs/2201.12576v1
- Date: Sat, 29 Jan 2022 12:27:52 GMT
- Title: Scale-arbitrary Invertible Image Downscaling
- Authors: Jinbo Xing, Wenbo Hu, Tien-Tsin Wong
- Abstract summary: We propose a scale-Arbitrary Invertible image Downscaling Network (AIDN) to downscale HR images with arbitrary scale factors.
Our AIDN achieves top performance for invertible downscaling with both arbitrary integer and non-integer scale factors.
- Score: 17.67415618760949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Downscaling is indispensable when distributing high-resolution (HR) images
over the Internet to fit the displays of various resolutions, while upscaling
is also necessary when users want to see details of the distributed images.
Recent invertible image downscaling methods jointly model these two problems
and achieve significant improvements. However, they only consider fixed integer
scale factors that cannot meet the requirement of conveniently fitting the
displays of various resolutions in real-world applications. In this paper, we
propose a scale-Arbitrary Invertible image Downscaling Network (AIDN), to
natively downscale HR images with arbitrary scale factors for fitting various
target resolutions. Meanwhile, the HR information is embedded in the downscaled
low-resolution (LR) counterparts in a nearly imperceptible form such that our
AIDN can also restore the original HR images solely from the LR images. The key
to supporting arbitrary scale factors is our proposed Conditional Resampling
Module (CRM) that conditions the downscaling/upscaling kernels and sampling
locations on both scale factors and image content. Extensive experimental
results demonstrate that our AIDN achieves top performance for invertible
downscaling with both arbitrary integer and non-integer scale factors.
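The abstract's key mechanism is conditioning the resampling kernels and sampling locations on both the scale factor and the image content. Below is a minimal sketch of that general idea as a scale-conditioned resampling layer, assuming a PyTorch-style implementation; the class name ScaleConditionedResampler, the small offset-prediction head, and the tanh-bounded offsets are illustrative assumptions, not the authors' CRM.

```python
# Hedged sketch of scale-conditioned resampling (illustration only, not the
# paper's Conditional Resampling Module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleConditionedResampler(nn.Module):
    """Resample a feature map to an arbitrary target size, with per-pixel
    sampling offsets predicted from image content and the scale factor."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Predicts a 2-channel offset field from features + a broadcast scale map.
        self.offset_net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, h, w = feat.shape
        out_h, out_w = max(1, round(h / scale)), max(1, round(w / scale))
        # Base sampling grid for the target resolution, in [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, out_h, device=feat.device)
        xs = torch.linspace(-1, 1, out_w, device=feat.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        base_grid = torch.stack((gx, gy), dim=-1).expand(b, out_h, out_w, 2)
        # Condition the offset prediction on content and the scale factor.
        scale_map = torch.full((b, 1, h, w), scale, device=feat.device)
        offsets = self.offset_net(torch.cat([feat, scale_map], dim=1))
        offsets = F.interpolate(offsets, size=(out_h, out_w),
                                mode="bilinear", align_corners=False)
        # Small, bounded perturbation of the sampling locations.
        grid = base_grid + 0.1 * torch.tanh(offsets.permute(0, 2, 3, 1))
        return F.grid_sample(feat, grid, mode="bilinear", align_corners=False)
```

For example, `ScaleConditionedResampler(64)(features, scale=1.7)` would produce a feature map downscaled by a non-integer factor of 1.7, with the sampling grid adjusted per pixel rather than fixed as in bicubic resampling.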
Related papers
- Learning Dual-Level Deformable Implicit Representation for Real-World Scale Arbitrary Super-Resolution [81.74583887661794]
We build a new real-world super-resolution benchmark with both integer and non-integer scaling factors.
We propose a Dual-level Deformable Implicit Representation (DDIR) to solve real-world scale arbitrary super-resolution.
Our trained model achieves state-of-the-art performance on the RealArbiSR and RealSR benchmarks for real-world scale arbitrary super-resolution.
arXiv Detail & Related papers (2024-03-16T13:44:42Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INNs) can significantly increase upscaling accuracy by jointly optimizing the downscaling and upscaling cycle (see the sketch after this list).
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Enhancing Image Rescaling using Dual Latent Variables in Invertible Neural Network [42.18106162158025]
A new downscaling latent variable is introduced to model variations in the image downscaling process.
It can improve image upscaling accuracy consistently without sacrificing image quality in downscaled LR images.
It is also shown to be effective in enhancing other INN-based models for image restoration applications like image hiding.
arXiv Detail & Related papers (2022-07-24T23:12:51Z)
- Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by modelling the distribution of the LR image and the remaining high-frequency component simultaneously.
To further enhance performance, additional losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss during training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z)
- Robust Reference-based Super-Resolution via C2-Matching [77.51610726936657]
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which produces explicit robust matching across transformation and resolution.
arXiv Detail & Related papers (2021-06-03T16:40:36Z)
- SRWarp: Generalized Image Super-Resolution under Arbitrary Transformation [65.88321755969677]
Deep CNNs have achieved significant successes in image processing and its applications, including single image super-resolution.
Recent approaches extend the scope to real-valued upsampling factors.
We propose the SRWarp framework to further generalize the SR tasks toward an arbitrary image transformation.
arXiv Detail & Related papers (2021-04-21T02:50:41Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
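Several of the INN-based entries above (Invertible Image Rescaling, Effective Invertible Arbitrary Image Rescaling, HCFlow) share the same downscale/upscale cycle: the forward pass splits an HR image into an LR image plus a latent that captures the lost detail, and the inverse pass restores HR from the LR image when the true latent is unavailable. The toy sketch below illustrates that cycle and a simplified joint objective; the pooling-based split, the zero-latent restoration, and the unweighted loss sum are stand-ins, not any of the listed papers' actual networks.

```python
# Hedged toy sketch of the invertible downscale/upscale cycle (illustration
# only; real INN-based rescalers use learned coupling layers, not pooling).
import torch
import torch.nn.functional as F

def downscale(hr: torch.Tensor):
    """Split an HR image (even spatial size assumed) into a half-resolution
    LR image and a latent z holding the lost high-frequency detail."""
    lr = F.avg_pool2d(hr, kernel_size=2)                          # coarse content
    z = hr - F.interpolate(lr, scale_factor=2, mode="nearest")    # lost detail
    return lr, z

def upscale(lr: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Invert the downscaling exactly, given LR and the latent."""
    return F.interpolate(lr, scale_factor=2, mode="nearest") + z

def cycle_losses(hr: torch.Tensor, lr_guide: torch.Tensor) -> torch.Tensor:
    """Simplified joint objective: keep the LR image close to a conventional
    downscaled guide, keep the latent small/case-agnostic, and reconstruct HR
    from LR without access to the true latent (here, using its zero mean)."""
    lr, z = downscale(hr)
    guide_loss = F.l1_loss(lr, lr_guide)          # LR should look natural
    latent_loss = z.pow(2).mean()                 # discourage case-specific z
    hr_rec = upscale(lr, torch.zeros_like(z))     # restore without true z
    rec_loss = F.l1_loss(hr_rec, hr)
    return guide_loss + latent_loss + rec_loss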
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information (including all summaries) and is not responsible for any consequences arising from its use.