Perceptual Image Super-Resolution with Progressive Adversarial Network
- URL: http://arxiv.org/abs/2003.03756v4
- Date: Thu, 19 Mar 2020 03:13:50 GMT
- Title: Perceptual Image Super-Resolution with Progressive Adversarial Network
- Authors: Lone Wong, Deli Zhao, Shaohua Wan, Bo Zhang
- Abstract summary: Single Image Super-Resolution (SISR) aims to recover a high-resolution image from a single small, low-quality input.
In this paper, we argue that the curse of dimensionality is the underlying reason limiting the performance of state-of-the-art algorithms.
We propose the Progressive Adversarial Network (PAN), which copes with this difficulty for domain-specific image super-resolution.
- Score: 17.289101902846358
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Single Image Super-Resolution (SISR) aims to recover a high-resolution image from a single small, low-quality input. With the growing popularity of consumer electronics in daily life, this topic has become increasingly attractive. In this paper, we argue that the curse of dimensionality is the underlying reason limiting the performance of state-of-the-art algorithms. To address this issue, we propose the Progressive Adversarial Network (PAN), which copes with this difficulty for domain-specific image super-resolution. The key principle of PAN is that we do not optimize any distance-based reconstruction error as the loss, and are thus free from the restriction of the curse of dimensionality. To maintain faithful reconstruction precision, we resort to a U-Net architecture and progressive growing of the network. With U-Net, the low-level features in the encoder can be transferred to the decoder to enhance textural details. Progressive growing increases image resolution gradually, thereby preserving the precision of the recovered image. Moreover, to obtain high-fidelity outputs, we leverage the framework of the powerful StyleGAN to perform adversarial learning. Free of the curse of dimensionality, our model can super-resolve large-size images with remarkable photo-realistic details and few distortions. Extensive experiments demonstrate the superiority of our algorithm over state-of-the-art methods both quantitatively and qualitatively.
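The abstract's two structural ideas, U-Net skip connections and progressive growing of resolution, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: learned convolutions are replaced by identity feature maps, skip fusion by simple averaging, and the StyleGAN-style adversarial training is omitted entirely. The function and variable names are invented for illustration; only the data flow is meant to match the description.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (one encoder step)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbor upsampling (one decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def pan_like_forward(lr_image, num_stages=2):
    """Encoder-decoder with U-Net skips, then one extra growing stage."""
    # Encoder: store low-level features at each scale for later skips.
    skips, x = [], lr_image
    for _ in range(num_stages):
        skips.append(x)           # low-level features passed to the decoder
        x = downsample(x)
    # Decoder: restore resolution stage by stage, fusing the stored skips
    # so textural detail from the encoder survives the bottleneck.
    for skip in reversed(skips):
        x = upsample(x)
        x = 0.5 * (x + skip)      # stand-in for learned fusion (concat + conv)
    # A final progressive-growing stage super-resolves beyond the input size.
    return upsample(x)

lr = np.random.rand(16, 16, 3)    # toy low-resolution input
sr = pan_like_forward(lr)
print(sr.shape)                   # (32, 32, 3): 2x the input resolution
```

In the real model each fusion step would be a learned module trained with an adversarial loss rather than a pixel-wise distance, which is exactly the point the abstract makes about avoiding distance-based reconstruction errors.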
Related papers
- SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and a real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z)
- Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data.
Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
arXiv Detail & Related papers (2021-11-18T18:59:10Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting [12.839962012888199]
We propose a Contextual Residual Aggregation (CRA) mechanism that produces high-frequency residuals for missing contents by aggregating weighted residuals from contextual patches.
We train the proposed model on small 512x512 images and perform inference on high-resolution images, achieving compelling inpainting quality.
arXiv Detail & Related papers (2020-05-19T18:55:32Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder [47.53609520395504]
We revisit classic example-based image super-resolution approaches and come up with a novel generative model for perceptual image super-resolution.
We propose a joint image denoising and super-resolution model via a Variational AutoEncoder.
With the aid of the discriminator, an additional super-resolution subnetwork is attached to super-resolve the denoised image with photo-realistic visual quality.
arXiv Detail & Related papers (2020-04-27T13:49:36Z)
- Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization [25.805568996596783]
De-quantization can improve the visual quality of low bit-depth images displayed on high bit-depth screens.
This paper proposes the DAGAN algorithm to perform super-resolution on image intensity resolution.
The DenseResAtt module consists of dense residual blocks armed with a self-attention mechanism.
arXiv Detail & Related papers (2020-04-07T06:45:01Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goal of maintaining spatially precise, high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales while simultaneously preserving high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing feature extraction into two task-independent streams, the dual-branch model facilitates the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.