Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution
- URL: http://arxiv.org/abs/2203.04960v1
- Date: Sat, 12 Feb 2022 15:37:13 GMT
- Title: Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution
- Authors: Man Zhou, Keyu Yan, Jinshan Pan, Wenqi Ren, Qi Xie, Xiangyong Cao
- Abstract summary: Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image.
- Score: 67.83489239124557
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Guided image super-resolution (GISR) aims to obtain a high-resolution (HR)
target image by enhancing the spatial resolution of a low-resolution (LR)
target image under the guidance of an HR image. However, previous model-based
methods mainly take the entire image as a whole and assume a prior
distribution between the HR target image and the HR guidance image, simply
ignoring many non-local common characteristics between them. To alleviate this
issue, we first propose a maximum a posteriori (MAP) estimation model for GISR
with two types of priors on the HR target image, i.e., a local implicit prior and a
global implicit prior. The local implicit prior aims to model the complex
relationship between the HR target image and the HR guidance image from a local
perspective, and the global implicit prior considers the non-local
auto-regression property between the two images from a global perspective.
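Abstracting the summary into symbols (all notation here is illustrative and assumed, not taken from the paper), a MAP model of this kind can be sketched as:

```latex
% Hypothetical GISR MAP objective: Y is the LR target, G is the HR guidance,
% D a degradation (blur + downsampling) operator, X the HR target estimate.
\hat{X} = \arg\min_{X} \ \underbrace{\|DX - Y\|_2^2}_{\text{data fidelity}}
        \;+\; \lambda_1 \, \Phi_{\mathrm{loc}}(X, G)
        \;+\; \lambda_2 \, \Phi_{\mathrm{glob}}(X, G)
```

where \Phi_loc would encode the local implicit prior and \Phi_glob the non-local auto-regression (global) prior, with \lambda_1, \lambda_2 trading off the two terms against data fidelity.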
Secondly, we design a novel alternating optimization algorithm to solve this
model for GISR. The algorithm has a concise structure that can readily be
replicated with commonly used deep network modules. Thirdly, to reduce the
information loss across iterative stages, a persistent memory mechanism is
introduced to augment the information representation by exploiting long
short-term memory (LSTM) units in the image and feature spaces. In this way, a
deep network with a degree of interpretability and high representational ability is
built. Extensive experimental results validate the superiority of our method on
a variety of GISR tasks, including Pan-sharpening, depth image
super-resolution, and MR image super-resolution.
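To make the unfolding idea concrete, below is a minimal toy sketch of a few unfolded stages with a persistent memory state carried between them. Everything here is an illustrative stand-in, not the paper's actual algorithm: the box-downsampling data term, the guidance-mixing proxy for the local implicit prior, and the running-average memory in place of the LSTM unit are all assumptions for the sketch.

```python
import numpy as np

def grad_data(x, y, scale):
    """Gradient of the data term ||downsample(x) - y||^2 (box downsampling)."""
    h, w = x.shape
    ds = x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    resid = ds - y
    # transpose of box-average downsampling: spread the residual back up
    return np.repeat(np.repeat(resid, scale, axis=0), scale, axis=1) / scale**2

def stage(x, y, g, m, scale, step=0.5, alpha=0.1):
    """One unfolded stage: data step, prior step, memory update."""
    # 1) gradient step on the data-fidelity term
    x = x - step * grad_data(x, y, scale)
    # 2) toy proxy for the local implicit prior: pull x toward the guidance
    x = (1 - alpha) * x + alpha * g
    # 3) persistent memory across stages (running average as an LSTM stand-in)
    m = 0.9 * m + 0.1 * x
    return x, m

rng = np.random.default_rng(0)
g = rng.random((8, 8))                              # HR guidance image
y = g.reshape(4, 2, 4, 2).mean(axis=(1, 3))         # LR target observation
x = np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)   # nearest-upsampled init
m = np.zeros_like(x)
for _ in range(5):                                  # a few unfolded stages
    x, m = stage(x, y, g, m, scale=2)
```

In the paper's actual network each stage's steps would be learned modules and the memory would be an LSTM operating in both image and feature space; the loop above only shows the shared skeleton of alternating data/prior updates with state passed between stages.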
Related papers
- Semantic Encoder Guided Generative Adversarial Face Ultra-Resolution
Network [15.102899995465041]
We propose a novel face super-resolution method, namely the Semantic guided Generative Adversarial Face Ultra-Resolution Network (SEGA-FURN).
The proposed network is composed of a novel semantic encoder that captures the embedded semantics to guide adversarial learning, and a novel generator that uses a hierarchical architecture named Residual in Internal Block (RIDB).
Experiments on large face datasets have proved that the proposed method can achieve superior super-resolution results and significantly outperform other state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2022-11-18T23:16:57Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and suppresses aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Robust Reference-based Super-Resolution via C2-Matching [77.51610726936657]
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which produces explicit, robust matching across transformation and resolution gaps.
arXiv Detail & Related papers (2021-06-03T16:40:36Z) - High-resolution Depth Maps Imaging via Attention-based Hierarchical
Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z) - Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z) - Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows the real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z) - PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of
Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.