Local Implicit Normalizing Flow for Arbitrary-Scale Image
Super-Resolution
- URL: http://arxiv.org/abs/2303.05156v3
- Date: Thu, 13 Jul 2023 06:27:11 GMT
- Title: Local Implicit Normalizing Flow for Arbitrary-Scale Image
Super-Resolution
- Authors: Jie-En Yao, Li-Yuan Tsao, Yi-Chen Lo, Roy Tseng, Chia-Che Chang,
Chun-Yi Lee
- Abstract summary: We propose "Local Implicit Normalizing Flow" (LINF) as a unified solution to the above problems.
LINF models the distribution of texture details under different scaling factors with normalizing flow.
We show that LINF achieves state-of-the-art perceptual quality compared with prior arbitrary-scale SR methods.
- Score: 11.653044501483667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Flow-based methods have demonstrated promising results in addressing the
ill-posed nature of super-resolution (SR) by learning the distribution of
high-resolution (HR) images with the normalizing flow. However, these methods
can only perform a predefined fixed-scale SR, limiting their potential in
real-world applications. Meanwhile, arbitrary-scale SR has gained more
attention and achieved great progress. Nonetheless, previous arbitrary-scale SR
methods ignore the ill-posed problem and train the model with per-pixel L1
loss, leading to blurry SR outputs. In this work, we propose "Local Implicit
Normalizing Flow" (LINF) as a unified solution to the above problems. LINF
models the distribution of texture details under different scaling factors with
normalizing flow. Thus, LINF can generate photo-realistic HR images with rich
texture details at arbitrary scale factors. We evaluate LINF with extensive
experiments and show that LINF achieves state-of-the-art perceptual quality
compared with prior arbitrary-scale SR methods.
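For orientation, the following is a minimal, hypothetical sketch of the idea described above: a coordinate- and scale-conditioned normalizing flow that models the distribution of per-pixel texture detail, so that sampling from it yields diverse, photo-realistic detail at any scale factor. The class names, conditioning layout, and architecture choices (affine coupling layers with a channel flip) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a LINF-style local texture flow (not the authors' code).
# A small conditional normalizing flow models per-pixel texture detail, conditioned
# on a local LR feature, the query coordinate offset, and the scale factor.
import math
import torch
import torch.nn as nn


class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer whose scale/shift are predicted from a condition vector."""

    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        # Transform the second half of x with scale/shift predicted from the first half + condition.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                              # bounded log-scale keeps the flow stable
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=-1), s.sum(dim=-1)

    def inverse(self, y, cond):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)


class LocalTextureFlow(nn.Module):
    """Maps Gaussian latents to RGB texture detail for one queried HR pixel."""

    def __init__(self, detail_dim=3, feat_dim=64, n_layers=4):
        super().__init__()
        cond_dim = feat_dim + 2 + 1                    # local feature + (x, y) offset + scale factor
        self.detail_dim = detail_dim
        self.layers = nn.ModuleList(
            ConditionalAffineCoupling(detail_dim, cond_dim) for _ in range(n_layers)
        )

    def log_prob(self, detail, cond):
        # Change of variables: log p(detail | cond) = log p_z(f(detail)) + sum of log-determinants.
        z, logdet = detail, detail.new_zeros(detail.shape[0])
        for layer in self.layers:
            z, ld = layer(z, cond)
            logdet = logdet + ld
            z = torch.flip(z, dims=[-1])               # cheap channel permutation between couplings
        log_pz = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * self.detail_dim * math.log(2 * math.pi)
        return log_pz + logdet

    def sample(self, cond, temperature=0.8):
        # Invert the flow on a temperature-scaled Gaussian latent to draw one plausible detail.
        z = temperature * torch.randn(cond.shape[0], self.detail_dim, device=cond.device)
        for layer in reversed(self.layers):
            z = torch.flip(z, dims=[-1])               # undo the permutation applied in log_prob
            z = layer.inverse(z, cond)
        return z
```

In a full pipeline, a local implicit feature extractor (LIIF-style, assumed here) would supply the conditioning feature for each queried HR coordinate, and the sampled detail would be added to a coarse interpolated prediction; the temperature argument trades sample diversity against fidelity.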
Related papers
- Boosting Flow-based Generative Super-Resolution Models via Learned Prior [8.557017814978334]
Flow-based super-resolution (SR) models have demonstrated astonishing capabilities in generating high-quality images.
These methods encounter several challenges during image generation, such as grid artifacts, exploding inverses, and suboptimal results due to a fixed sampling temperature.
This work introduces a conditional learned prior to the inference phase of a flow-based SR model.
arXiv Detail & Related papers (2024-03-16T18:04:12Z)
- LFSRDiff: Light Field Image Super-Resolution via Diffusion Models [18.20217829625834]
Light field (LF) image super-resolution (SR) is a challenging problem due to its inherent ill-posed nature.
Mainstream LF image SR methods typically adopt a deterministic approach, generating only a single output supervised by pixel-wise loss functions.
We introduce LFSRDiff, the first diffusion-based LF image SR model, by incorporating the LF disentanglement mechanism.
arXiv Detail & Related papers (2023-11-27T07:31:12Z)
- Toward Real-World Super-Resolution via Adaptive Downsampling Models [58.38683820192415]
This study proposes a novel method to simulate an unknown downsampling process without imposing restrictive prior knowledge.
We propose a generalizable low-frequency loss (LFL) in the adversarial training framework to imitate the distribution of target LR images without using any paired examples.
arXiv Detail & Related papers (2021-09-08T06:00:32Z)
- Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by simultaneously modelling the distribution of the LR image and the remaining high-frequency component.
To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z)
- SRWarp: Generalized Image Super-Resolution under Arbitrary Transformation [65.88321755969677]
Deep CNNs have achieved significant successes in image processing and its applications, including single image super-resolution.
Recent approaches extend the scope to real-valued upsampling factors.
We propose the SRWarp framework to further generalize the SR tasks toward an arbitrary image transformation.
arXiv Detail & Related papers (2021-04-21T02:50:41Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
- SRFlow: Learning the Super-Resolution Space with Normalizing Flow [176.07982398988747]
Super-resolution is an ill-posed problem, since it allows for multiple predictions for a given low-resolution image.
We propose SRFlow: a normalizing flow based super-resolution method capable of learning the conditional distribution of the output.
Our model is trained in a principled manner using a single loss, namely the negative log-likelihood; a hedged sketch of this objective follows this list.
arXiv Detail & Related papers (2020-06-25T06:34:04Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
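Several of the flow-based entries above (SRFlow, HCFlow, and LINF itself) train with a single negative log-likelihood loss under the change-of-variables formula. The snippet below is a hedged illustration of one such training step, reusing the hypothetical LocalTextureFlow from the sketch after the abstract; the optimizer settings, batch shapes, and placeholder tensors are assumptions, not code from any of the cited papers.

```python
# Hedged sketch of negative log-likelihood training for a conditional flow
# (the single loss mentioned in the SRFlow and HCFlow summaries above).
# Reuses the hypothetical LocalTextureFlow class from the earlier sketch.
import torch

flow = LocalTextureFlow(detail_dim=3, feat_dim=64)       # illustrative model, not any paper's code
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-4)

# Placeholder batch: ground-truth texture details and their conditioning vectors
# (local LR feature + relative coordinate + scale factor), e.g. taken from a dataloader.
hr_detail = torch.randn(32, 3)
cond = torch.randn(32, 64 + 2 + 1)

loss = -flow.log_prob(hr_detail, cond).mean()             # NLL = -E[log p(detail | cond)]
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At test time, sampling at a temperature below 1 typically trades diversity for fidelity;
# a fixed temperature is the limitation the "learned prior" entry above targets.
sr_detail = flow.sample(cond, temperature=0.8)
```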