From Face to Natural Image: Learning Real Degradation for Blind Image
Super-Resolution
- URL: http://arxiv.org/abs/2210.00752v1
- Date: Mon, 3 Oct 2022 08:09:21 GMT
- Title: From Face to Natural Image: Learning Real Degradation for Blind Image
Super-Resolution
- Authors: Xiaoming Li, Chaofeng Chen, Xianhui Lin, Wangmeng Zuo, Lei Zhang
- Abstract summary: We design training pairs for super-resolving the real-world low-quality (LQ) images.
We take paired HQ and LQ face images as inputs to explicitly predict degradation-aware and content-independent representations.
We then transfer these real degradation representations from face to natural images to synthesize the degraded LQ natural images.
- Score: 72.68156760273578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing proper training pairs is critical for super-resolving
real-world low-quality (LQ) images, yet it is difficult to either acquire
paired ground-truth HQ images or synthesize photo-realistic degraded
observations. Recent works mainly circumvent this by simulating the
degradation with handcrafted or estimated degradation parameters. However,
existing synthetic degradation models are incapable of modeling complicated
real degradation types, resulting in limited improvement in these scenarios,
e.g., old photos. Notably, face images, which undergo the same degradation
process as natural images, can be robustly restored with photo-realistic textures by
exploiting their specific structure priors. In this work, we use these
real-world LQ face images and their restored HQ counterparts to model the
complex real degradation (namely ReDegNet), and then transfer it to HQ natural
images to synthesize their realistic LQ ones. Specifically, we take these
paired HQ and LQ face images as inputs to explicitly predict the
degradation-aware and content-independent representations, which control the
degraded image generation. Subsequently, we transfer these real degradation
representations from face to natural images to synthesize the degraded LQ
natural images. Experiments show that our ReDegNet can well learn the real
degradation process from face images, and the restoration network trained with
our synthetic pairs performs favorably against SOTAs. More importantly, our
method provides a new manner to handle the unsynthesizable real-world scenarios
by learning their degradation representations through face images within them,
which can be used for specifically fine-tuning. The source code is available at
https://github.com/csxmli2016/ReDegNet.
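The two-stage pipeline described in the abstract (estimate a content-independent degradation representation from a paired LQ/HQ face, then apply that representation to an HQ natural image to synthesize its LQ counterpart) can be illustrated with a toy NumPy sketch. The gain-plus-noise representation below is a deliberate simplification invented for illustration; ReDegNet itself learns these representations with a neural network.

```python
import numpy as np

def estimate_degradation(hq_face, lq_face):
    """Toy stand-in for ReDegNet's degradation encoder.

    Summarizes how lq_face differs from hq_face as a
    content-independent pair: (global gain, additive noise std).
    """
    gain = lq_face.mean() / (hq_face.mean() + 1e-8)
    noise_std = float(np.std(lq_face - gain * hq_face))
    return gain, noise_std

def apply_degradation(hq_natural, rep, rng=None):
    """Transfer the estimated degradation to an HQ natural image,
    producing a synthetic LQ image for training a restoration model."""
    gain, noise_std = rep
    if rng is None:
        rng = np.random.default_rng(0)
    lq = gain * hq_natural + rng.normal(0.0, noise_std, hq_natural.shape)
    return np.clip(lq, 0.0, 1.0)
```

In this simplified setting, the representation carries no face content at all, which mirrors the paper's goal of separating degradation-aware information from image content before transferring it across domains.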
Related papers
- Towards Unsupervised Blind Face Restoration using Diffusion Prior [12.69610609088771]
Blind face restoration methods have shown remarkable performance when trained on large-scale synthetic datasets with supervised learning.
These datasets are often generated by simulating low-quality face images with a handcrafted image degradation pipeline.
In this paper, we address this issue by using only a set of input images, with unknown degradations and without ground truth targets, to fine-tune a restoration model.
Our best model also achieves the state-of-the-art results on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-10-06T20:38:14Z)
- SSL: A Self-similarity Loss for Improving Generative Image Super-resolution [11.94842557256442]
Generative adversarial networks (GANs) and generative diffusion models (DMs) have been widely used in real-world image super-resolution (Real-ISR).
These generative models are prone to generating visual artifacts and false image structures, resulting in unnatural Real-ISR results.
We propose a simple yet effective self-similarity loss (SSL) to improve the performance of generative Real-ISR models.
arXiv Detail & Related papers (2024-08-11T07:46:06Z)
- Preserving Full Degradation Details for Blind Image Super-Resolution [40.152015542099704]
We propose an alternative to learn degradation representations through reproducing degraded low-resolution (LR) images.
By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations.
Experiments show that our representations can extract accurate and highly robust degradation information.
arXiv Detail & Related papers (2024-07-01T13:54:59Z)
- MetaF2N: Blind Image Super-Resolution by Learning Efficient Model Adaptation from Faces [51.42949911178461]
We propose a method dubbed MetaF2N that fine-tunes model parameters to adapt to the whole natural image in a meta-learning framework.
Considering the gaps between the recovered faces and ground-truths, we deploy a MaskNet for adaptively predicting loss weights at different positions to reduce the impact of low-confidence areas.
arXiv Detail & Related papers (2023-09-15T02:45:21Z)
- Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
arXiv Detail & Related papers (2023-07-20T05:49:21Z)
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and a real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.