Normalizing Flow as a Flexible Fidelity Objective for Photo-Realistic
Super-resolution
- URL: http://arxiv.org/abs/2111.03649v1
- Date: Fri, 5 Nov 2021 17:56:51 GMT
- Title: Normalizing Flow as a Flexible Fidelity Objective for Photo-Realistic
Super-resolution
- Authors: Andreas Lugmayr, Martin Danelljan, Fisher Yu, Luc Van Gool, Radu
Timofte
- Abstract summary: Super-resolution is an ill-posed problem, where a ground-truth high-resolution image represents only one possibility in the space of plausible solutions.
Yet, the dominant paradigm is to employ pixel-wise losses, such as L_1, which drive the prediction towards a blurry average.
We address this issue by revisiting the L_1 loss and show that it corresponds to a one-layer conditional flow.
Inspired by this relation, we explore general flows as a fidelity-based alternative to the L_1 objective.
We demonstrate that the flexibility of deeper flows leads to better visual quality and consistency when combined with adversarial losses.
- Score: 161.39504409401354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Super-resolution is an ill-posed problem, where a ground-truth
high-resolution image represents only one possibility in the space of plausible
solutions. Yet, the dominant paradigm is to employ pixel-wise losses, such as
L_1, which drive the prediction towards a blurry average. This leads to
fundamentally conflicting objectives when combined with adversarial losses,
which degrades the final quality. We address this issue by revisiting the L_1
loss and show that it corresponds to a one-layer conditional flow. Inspired by
this relation, we explore general flows as a fidelity-based alternative to the
L_1 objective. We demonstrate that the flexibility of deeper flows leads to
better visual quality and consistency when combined with adversarial losses. We
conduct extensive user studies for three datasets and scale factors, where our
approach is shown to outperform state-of-the-art methods for photo-realistic
super-resolution. Code and trained models will be available at:
git.io/AdFlow
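As a reading aid, here is a minimal sketch of the stated correspondence (an illustration, not the paper's exact derivation): minimizing the L_1 loss is maximum-likelihood training of a one-layer conditional flow that shifts the high-resolution image y by a predicted mean \mu_\theta(x) and places a Laplace base density with scale b on the result.

```latex
% Sketch: L_1 fidelity as the NLL of a one-layer conditional flow.
% The flow is the shift z = y - \mu_\theta(x); its Jacobian is the identity,
% so no log-determinant term appears. \mu_\theta and the scale b are
% assumptions of this illustration, not notation from the paper.
\[
\begin{aligned}
  p_\theta(y \mid x) &= \prod_i \frac{1}{2b}\,
      \exp\!\Big(-\frac{\lvert y_i - \mu_\theta(x)_i \rvert}{b}\Big),\\
  -\log p_\theta(y \mid x) &= \frac{1}{b}\,\lVert y - \mu_\theta(x) \rVert_1
      + \mathrm{const}.
\end{aligned}
\]
```

Deeper conditional flows replace this single shift with a stack of invertible layers, which is the flexibility referred to above when combining the fidelity term with adversarial losses.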
Related papers
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- On the Robustness of Normalizing Flows for Inverse Problems in Imaging [16.18759484251522]
Unintended severe artifacts are occasionally observed in the output of conditional normalizing flows.
We empirically and theoretically reveal that these problems are caused by "exploding variance" in the conditional affine coupling layer.
We suggest a simple remedy that substitutes the affine coupling layers with the modified rational quadratic spline coupling layers in normalizing flows.
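To make concrete where the variance can explode, here is a minimal NumPy sketch of a conditional affine coupling layer (illustrative only; the function names and toy networks are assumptions, and the spline-based remedy itself is not shown).

```python
import numpy as np

def conditional_affine_coupling(x, cond, scale_net, shift_net):
    """Forward pass of a conditional affine coupling layer (illustrative).

    One half of the input passes through unchanged; the other half is
    scaled and shifted by networks that see the untouched half plus the
    conditioning signal. The factor exp(s) is the term whose unbounded
    growth causes the "exploding variance" discussed above.
    """
    x1, x2 = np.split(x, 2)
    h = np.concatenate([x1, cond])
    s = scale_net(h)                  # predicted log-scale (unbounded here)
    t = shift_net(h)                  # predicted shift
    y2 = x2 * np.exp(s) + t
    log_det = np.sum(s)               # log |det J| of the coupling transform
    return np.concatenate([x1, y2]), log_det

# Toy usage with random linear "networks".
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
y, log_det = conditional_affine_coupling(
    rng.normal(size=8), rng.normal(size=4),
    scale_net=lambda h: 0.1 * (W_s @ h), shift_net=lambda h: W_t @ h)
```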
arXiv Detail & Related papers (2022-12-08T15:18:28Z)
- DeepMLE: A Robust Deep Maximum Likelihood Estimator for Two-view Structure from Motion [9.294501649791016]
Two-view structure from motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM (vSLAM)
We formulate the two-view SfM problem as a maximum likelihood estimation (MLE) and solve it with the proposed framework, denoted as DeepMLE.
Our method significantly outperforms the state-of-the-art end-to-end two-view SfM approaches in accuracy and generalization capability.
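As a generic illustration of a deep maximum-likelihood objective (not the paper's exact likelihood model, which is specific to two-view geometry): a network predicts both a quantity and its uncertainty, and training minimizes the resulting Gaussian negative log-likelihood.

```python
import numpy as np

def gaussian_nll(pred, log_var, target):
    """Generic heteroscedastic Gaussian NLL (illustrative only): the network
    outputs a prediction and a per-element log-variance; uncertain predictions
    are penalized less for their residuals but pay a log-variance cost."""
    return 0.5 * np.mean((pred - target) ** 2 / np.exp(log_var) + log_var)
```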
arXiv Detail & Related papers (2022-10-11T15:07:25Z)
- SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z)
- DeFlow: Learning Complex Image Degradations from Unpaired Data with Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
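A minimal sketch of the idea as summarized here (names are illustrative, and the shared flow's encode/decode steps are omitted): degradations are modeled as an additive Gaussian shift applied to the clean image's latent code before decoding.

```python
import numpy as np

def degrade_in_latent_space(z_clean, mu, log_sigma, rng):
    """Shift a clean image's latent code by learned Gaussian noise
    (illustrative sketch of modeling degradations in a shared flow's
    latent space; decoding the shifted latent back to image space is
    not shown)."""
    noise = rng.normal(size=z_clean.shape)
    return z_clean + mu + np.exp(log_sigma) * noise
```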
arXiv Detail & Related papers (2021-01-14T18:58:01Z)
- SRFlow: Learning the Super-Resolution Space with Normalizing Flow [176.07982398988747]
Super-resolution is an ill-posed problem, since it allows for multiple predictions for a given low-resolution image.
We propose SRFlow: a normalizing flow based super-resolution method capable of learning the conditional distribution of the output.
Our model is trained in a principled manner using a single loss, namely the negative log-likelihood.
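The single training loss mentioned above can be written down compactly; here is a minimal sketch under a standard-Gaussian base density (the conditional flow that maps the HR image to z given the LR image is assumed and not shown).

```python
import numpy as np

def conditional_flow_nll(z, sum_log_det):
    """Negative log-likelihood of an HR image y under a conditional flow.

    z            -- latent code obtained by pushing y through the flow,
                    conditioned on the LR image (computed elsewhere)
    sum_log_det  -- accumulated log |det J| over all flow layers

    By the change-of-variables formula,
    -log p(y|x) = -log N(z; 0, I) - sum_log_det.
    """
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2.0 * np.pi)
    return -(log_pz + sum_log_det)
```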
arXiv Detail & Related papers (2020-06-25T06:34:04Z)
- Enhancing Perceptual Loss with Adversarial Feature Matching for Super-Resolution [5.258555266148511]
Single image super-resolution (SISR) is an ill-posed problem with an indeterminate number of valid solutions.
We show that the root cause of these pattern artifacts can be traced back to a mismatch between the pre-training objective of perceptual loss and the super-resolved objective.
arXiv Detail & Related papers (2020-05-15T12:36:54Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
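A self-contained toy sketch of the latent-space-exploration idea (the linear "generator", 4x average-pool downsampler, and plain gradient descent are illustrative stand-ins, not PULSE's actual generator, losses, or sphere constraint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a fixed linear "generator" mapping a 32-d latent
# to a 16x16 image, and 4x average pooling as the known downsampling.
G = rng.normal(size=(16 * 16, 32))
generate = lambda z: G @ z
downsample = lambda img: img.reshape(4, 4, 4, 4).mean(axis=(1, 3)).ravel()

def pool_adjoint(r):
    # Adjoint of 4x average pooling: spread each LR residual over its block.
    return np.repeat(np.repeat(r.reshape(4, 4), 4, axis=0), 4, axis=1).ravel() / 16.0

def upsample_by_latent_search(lr, steps=200, step_size=0.05):
    """Search for a latent whose generated image, once downscaled, matches
    the observed LR input; the HR output is the generated image itself."""
    z = rng.normal(size=32)
    for _ in range(steps):
        residual = downsample(generate(z)) - lr           # data-fidelity residual
        z -= step_size * (G.T @ pool_adjoint(residual))   # gradient step on z
    return generate(z)

# Toy usage: observe a downscaled image, then recover an HR estimate.
lr_obs = downsample(generate(rng.normal(size=32)))
hr_est = upsample_by_latent_search(lr_obs)
```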
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
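A minimal sketch of fusing two feature streams with a gate (illustrative only; the actual convolutional branches and the way the gate is computed are not shown):

```python
import numpy as np

def gated_fusion(base_feat, recovered_feat, gate_logits):
    """Blend base and recovered features element-wise with a sigmoid gate
    (illustrative of the dual-branch fusion idea, not the exact network)."""
    gate = 1.0 / (1.0 + np.exp(-gate_logits))
    return gate * base_feat + (1.0 - gate) * recovered_feat
```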
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.