Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN
- URL: http://arxiv.org/abs/2201.06383v1
- Date: Mon, 17 Jan 2022 12:42:56 GMT
- Title: Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN
- Authors: Jie Song and Huawei Yi and Wenqian Xu and Xiaohui Li and Bo Li and
Yuanyuan Liu
- Abstract summary: This paper proposes a method called Dual Perceptual Loss (DP Loss), which replaces the original perceptual loss in single image super-resolution reconstruction.
Due to the complementary property between VGG features and ResNet features, the proposed DP Loss exploits the advantages of learning both features simultaneously.
Qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of the proposed method over state-of-the-art super-resolution methods.
- Score: 13.335546116599494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proposal of perceptual loss solved the problem that the per-pixel
difference loss function causes the reconstructed image to be overly smooth,
which marked significant progress in the field of single image super-resolution
reconstruction. Furthermore, generative adversarial networks (GANs) have been
applied to the super-resolution field, effectively improving the visual
quality of reconstructed images. However, under the condition of high
upscaling factors, excessive abnormal reasoning by the network produces
distorted structures, so that there is a certain deviation between the
reconstructed image and the ground-truth image. To fundamentally improve the
quality of reconstructed images, this paper proposes an effective method
called Dual Perceptual Loss (DP Loss), which replaces the original perceptual
loss in single image super-resolution reconstruction. Due to the complementary
property between VGG features and ResNet features, the proposed DP Loss
exploits the advantages of learning both features simultaneously, which
significantly improves the reconstruction quality of images. Qualitative and
quantitative analysis on benchmark datasets demonstrates the superiority of
the proposed method over state-of-the-art super-resolution methods.
Related papers
- Spatial-Contextual Discrepancy Information Compensation for GAN
Inversion [67.21442893265973]
We introduce a novel spatial-contextual discrepancy information compensation-based GAN-inversion method (SDIC).
SDIC bridges the gap in image details between the original image and the reconstructed/edited image.
Our proposed method achieves an excellent distortion-editability trade-off at a fast inference speed for both image inversion and editing tasks.
arXiv Detail & Related papers (2023-12-12T08:58:56Z) - All-in-one Multi-degradation Image Restoration Network via Hierarchical
Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z) - Relationship Quantification of Image Degradations [72.98190570967937]
Degradation Relationship Index (DRI) is defined as the mean drop rate difference in the validation loss between two models.
DRI always predicts performance improvement by using the specific degradation as an auxiliary to train models.
We propose a simple but effective method to estimate whether the given degradation combinations could improve the performance on the anchor degradation.
arXiv Detail & Related papers (2022-12-08T09:05:19Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and alleviates aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - SRR-Net: A Super-Resolution-Involved Reconstruction Method for High
Resolution MR Imaging [7.42807471627113]
The proposed SRR-Net recovers high-resolution brain images with both good visual quality and good perceptual quality.
Experiment results using in-vivo HR multi-coil brain data confirm this capability.
arXiv Detail & Related papers (2021-04-13T02:19:12Z) - Enhancing Perceptual Loss with Adversarial Feature Matching for
Super-Resolution [5.258555266148511]
Single image super-resolution (SISR) is an ill-posed problem with an indeterminate number of valid solutions.
We show that the root cause of these pattern artifacts can be traced back to a mismatch between the pre-training objective of perceptual loss and the super-resolved objective.
arXiv Detail & Related papers (2020-05-15T12:36:54Z) - Structure-Preserving Super Resolution with Gradient Guidance [87.79271975960764]
Structures matter in single image super resolution (SISR)
Recent studies benefiting from generative adversarial network (GAN) have promoted the development of SISR.
However, there are always undesired structural distortions in the recovered images.
arXiv Detail & Related papers (2020-03-29T17:26:58Z) - Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.