Cross-Scale Internal Graph Neural Network for Image Super-Resolution
- URL: http://arxiv.org/abs/2006.16673v2
- Date: Tue, 20 Oct 2020 13:59:29 GMT
- Title: Cross-Scale Internal Graph Neural Network for Image Super-Resolution
- Authors: Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, Chen Change Loy
- Abstract summary: Non-local self-similarity in natural images has been well studied as an effective prior in image restoration.
For single image super-resolution (SISR), most existing deep non-local methods only exploit similar patches within the same scale of the low-resolution (LR) input image.
The paper addresses this with a novel cross-scale internal graph neural network (IGNN).
- Score: 147.77050877373674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-local self-similarity in natural images has been well studied as an
effective prior in image restoration. However, for single image
super-resolution (SISR), most existing deep non-local methods (e.g., non-local
neural networks) only exploit similar patches within the same scale of the
low-resolution (LR) input image. Consequently, the restoration is limited to
using the same-scale information while neglecting potential high-resolution
(HR) cues from other scales. In this paper, we explore the cross-scale patch
recurrence property of a natural image, i.e., similar patches tend to recur
many times across different scales. This is achieved using a novel cross-scale
internal graph neural network (IGNN). Specifically, we dynamically construct a
cross-scale graph by searching k-nearest neighboring patches in the downsampled
LR image for each query patch in the LR image. We then obtain the corresponding
k HR neighboring patches in the LR image and aggregate them adaptively in
accordance with the edge label of the constructed graph. In this way, the HR
information can be passed from k HR neighboring patches to the LR query patch
to help it recover more detailed textures. Besides, these internal
image-specific LR/HR exemplars are also significant complements to the external
information learned from the training dataset. Extensive experiments
demonstrate the effectiveness of IGNN against the state-of-the-art SISR methods
including existing non-local networks on standard benchmarks.
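The core search step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the brute-force L2 search, and the stride-based downsampling are all assumptions (the paper searches features and uses proper downsampling), but it shows the cross-scale idea — match each LR query patch against patches of a downsampled copy, then read the corresponding larger patches back out of the LR image as HR exemplars.

```python
# Hypothetical sketch of cross-scale k-NN patch search (not the IGNN code).
import numpy as np

def cross_scale_knn(lr, scale=2, patch=3, k=5):
    h, w = lr.shape
    # Naive s-fold downsampling by striding; the paper uses proper filtering.
    lr_down = lr[::scale, ::scale]
    hd, wd = lr_down.shape

    # Collect every candidate patch from the downsampled image.
    cands, coords = [], []
    for i in range(hd - patch + 1):
        for j in range(wd - patch + 1):
            cands.append(lr_down[i:i+patch, j:j+patch].ravel())
            coords.append((i, j))
    cands = np.stack(cands)

    exemplars = {}
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            q = lr[i:i+patch, j:j+patch].ravel()
            d = np.sum((cands - q) ** 2, axis=1)   # L2 distance to each candidate
            nn = np.argsort(d)[:k]                 # indices of k nearest patches
            # Map each match back into the LR image at `scale`x the patch size:
            # these larger patches serve as HR exemplars for the query patch.
            exemplars[(i, j)] = [
                lr[ci*scale:(ci+patch)*scale, cj*scale:(cj+patch)*scale]
                for ci, cj in (coords[n] for n in nn)
            ]
    return exemplars
```

The returned exemplars are what the graph aggregation step would then combine, weighted by the edge labels, to restore the query patch.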
Related papers
- Towards a Sampling Theory for Implicit Neural Representations [0.3222802562733786]
Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer and computational imaging.
We show how to recover images from a hidden-layer INR using a generalized form of weight decay regularization.
We empirically assess the probability of achieving exact recovery of images realized by low-width single-layer INRs, and illustrate the performance of INRs on super-resolution recovery of more realistic continuous-domain phantom images.
arXiv Detail & Related papers (2024-05-28T17:53:47Z) - Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - Enhancing Image Rescaling using Dual Latent Variables in Invertible Neural Network [42.18106162158025]
A new downscaling latent variable is introduced to model variations in the image downscaling process.
It can improve image upscaling accuracy consistently without sacrificing image quality in downscaled LR images.
It is also shown to be effective in enhancing other INN-based models for image restoration applications like image hiding.
arXiv Detail & Related papers (2022-07-24T23:12:51Z) - Memory Efficient Patch-based Training for INR-based GANs [13.19626131847784]
Training existing approaches requires a heavy computational cost proportional to the image resolution.
We propose a multi-stage patch-based training, a novel and scalable approach that can train INR-based GANs with a flexible computational cost.
Specifically, our method generates and discriminates patch by patch to learn both the local details and the global structure of the image.
arXiv Detail & Related papers (2022-07-04T13:28:53Z) - An Arbitrary Scale Super-Resolution Approach for 3-Dimensional Magnetic Resonance Image using Implicit Neural Representation [37.43985628701494]
High Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis.
Recent studies showed that, with deep convolutional neural networks, isotropic HR MR images could be recovered from low-resolution (LR) input.
We propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images.
arXiv Detail & Related papers (2021-10-27T14:48:54Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
arXiv Detail & Related papers (2021-05-21T15:47:18Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easy-implemented Convolutional Neural Network (CNN) SR models.
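The degradation pipeline summarized above (estimate a kernel, generate matching LR images) can be sketched roughly as below. This is an assumed, minimal version, not the FCA authors' code: the function name is hypothetical, the kernel is taken as given rather than estimated, and the blur-then-subsample model is a common simplification of real-scene degradation.

```python
# Hypothetical sketch of kernel-based LR generation (not the FCA code):
# blur the HR image with an estimated degradation kernel, then subsample.
import numpy as np

def degrade(hr, kernel, scale=2):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Edge-pad so the output keeps the HR spatial size before subsampling.
    padded = np.pad(hr, ((ph, ph), (pw, pw)), mode="edge")
    h, w = hr.shape
    blurred = np.zeros_like(hr)
    for i in range(h):
        for j in range(w):
            # Correlate the kernel with the local neighborhood.
            blurred[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    return blurred[::scale, ::scale]   # subsample to the LR resolution
```

Pairs of `hr` and `degrade(hr, kernel)` would then form the domain-consistent LR-HR training data for the CNN SR model.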
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.