Learning Omni-frequency Region-adaptive Representations for Real Image
Super-Resolution
- URL: http://arxiv.org/abs/2012.06131v2
- Date: Sun, 10 Jan 2021 06:12:15 GMT
- Title: Learning Omni-frequency Region-adaptive Representations for Real Image
Super-Resolution
- Authors: Xin Li, Xin Jin, Tao Yu, Yingxue Pang, Simeng Sun, Zhizheng Zhang,
Zhibo Chen
- Abstract summary: Key to solving real image super-resolution (RealSR) problem lies in learning feature representations that are both informative and content-aware.
In this paper, we propose an Omni-frequency Region-adaptive Network (OR-Net) to address both challenges.
- Score: 37.74756727980146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional single image super-resolution (SISR) methods that focus on
solving a single, uniform degradation (i.e., bicubic down-sampling) typically
suffer from poor performance when applied to real-world low-resolution (LR)
images because of their complicated, realistic degradations. The key to solving
this more challenging real image super-resolution (RealSR) problem lies in
learning feature representations that are both informative and content-aware. In
this paper, we propose an Omni-frequency Region-adaptive Network (OR-Net) to
address both challenges; here, we refer to the features of all low, middle, and
high frequencies as omni-frequency features. Specifically, we start from the
frequency perspective and design a Frequency Decomposition (FD) module that
separates the different frequency components so as to comprehensively compensate
for the information lost in a real LR image. Then, considering that different
regions of a real LR image lose different frequency information, we further
design a Region-adaptive Frequency Aggregation (RFA) module that leverages
dynamic convolution and spatial attention to adaptively restore the frequency
components of different regions. Extensive experiments demonstrate the
effectiveness and scenario-agnostic nature of our OR-Net for RealSR.
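The FD and RFA modules in the abstract are learned networks; the sketch below is only a minimal, fixed-filter illustration of the underlying idea. The box-blur low-pass, the three-band split, and the attention map derived from high-frequency energy are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def box_blur(img, k):
    # Separable box blur as a stand-in low-pass filter (the paper's FD
    # module is learned; this fixed filter is purely illustrative).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 0, padded)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    return out

def frequency_decompose(img):
    # Split an image into low/mid/high bands by successive low-passing,
    # mirroring the "omni-frequency" feature idea.
    b3 = box_blur(img, 3)
    low = box_blur(img, 9)
    mid = b3 - low
    high = img - b3
    return low, mid, high

def region_adaptive_aggregate(low, mid, high):
    # Toy stand-in for the RFA module: a spatial attention map derived
    # from local high-frequency energy reweights the high band, so
    # detail-rich regions receive stronger high-frequency restoration.
    attn = np.abs(high)
    attn = attn / (attn.max() + 1e-8)          # normalize to [0, 1]
    return low + mid + (0.5 + 0.5 * attn) * high

rng = np.random.default_rng(0)
img = rng.random((32, 32))
low, mid, high = frequency_decompose(img)
# The three bands sum back to the original image by construction.
print(np.allclose(low + mid + high, img))
```

The decomposition is exactly invertible here (the bands sum to the input), which makes the role of the aggregation step easy to see: it only changes how strongly each region's high-frequency band is restored.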
Related papers
- FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss [5.349799154834945]
This paper introduces Frequency Consistency for Implicit Neural Representation (FreqINR), an innovative arbitrary-scale super-resolution method.
During training, we employ Adaptive Discrete Cosine Transform Frequency Loss (ADFL) to minimize the frequency gap between HR and ground-truth images.
During inference, we extend the receptive field to preserve spectral coherence between low-resolution (LR) and ground-truth images.
arXiv Detail & Related papers (2024-08-25T03:53:17Z) - RBSR: Efficient and Flexible Recurrent Network for Burst
Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - A Scale-Arbitrary Image Super-Resolution Network Using Frequency-domain
Information [42.55177009667711]
Image super-resolution (SR) is a technique to recover lost high-frequency information in low-resolution (LR) images.
In this paper, we study image features in the frequency domain to design a novel scale-arbitrary image SR network.
arXiv Detail & Related papers (2022-12-08T15:10:49Z) - FreqNet: A Frequency-domain Image Super-Resolution Network with Discrete
Cosine Transform [16.439669339293747]
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain high-resolution (HR) output from low-resolution (LR) input.
Despite high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds the desired high-frequency details.
We propose FreqNet, an intuitive pipeline from the frequency domain perspective, to solve this problem.
arXiv Detail & Related papers (2021-11-21T11:49:12Z) - Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical-flow-based and end-to-end deep-learning-based solutions, are error-prone in either detail restoration or ghosting-artifact removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) that conducts HDR fusion in the frequency domain, where the Discrete Wavelet Transform (DWT) is used to decompose the inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
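The band split that FNet builds on can be made concrete with a one-level 2-D Haar DWT, sketched below. FNet itself is a learned network; this fixed transform only illustrates how a DWT yields one low-frequency band (used against ghosting) and three high-frequency detail bands.

```python
import numpy as np

def haar_dwt2(img):
    # One level of the 2-D Haar DWT: LL carries the low-frequency
    # content; LH/HL/HH carry horizontal, vertical, and diagonal
    # high-frequency detail.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low band: used to avoid ghosting
    lh = (a - b + c - d) / 4.0   # high bands: used to preserve details
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse: the four bands reconstruct the input losslessly.
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
bands = haar_dwt2(img)
print(np.allclose(haar_idwt2(*bands), img))
```

Because the transform is perfectly invertible, any processing applied to the high bands (detail preservation) cannot silently corrupt the low band (ghosting suppression), which is the appeal of doing the fusion in the frequency domain.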
arXiv Detail & Related papers (2021-08-03T12:26:33Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) method that ensures frequency-domain consistency when applying super-resolution (SR) methods to real scenes.
We estimate degradation kernels from unsupervised images and generate the corresponding low-resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented convolutional neural network (CNN) SR models.
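The kernel-based LR generation step can be sketched as blurring the HR image with a degradation kernel and then subsampling. FCA estimates its kernels from real images; the Gaussian kernel, its `sigma`, and the scale factor below are assumed values for illustration only.

```python
import numpy as np

def make_gaussian_kernel(size=5, sigma=1.2):
    # Stand-in for an estimated degradation kernel; FCA estimates its
    # kernels from real images, whereas sigma here is just assumed.
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def degrade(hr, kernel, scale=2):
    # Blur the HR image with the kernel, then subsample: the resulting
    # LR image shares the degradation's frequency characteristics,
    # giving a domain-consistent LR-HR pair for training the SR CNN.
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.empty_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            patch = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            blurred[i, j] = (patch * kernel).sum()
    return blurred[::scale, ::scale]

rng = np.random.default_rng(0)
hr = rng.random((16, 16))
lr = degrade(hr, make_gaussian_kernel())
print(lr.shape)   # (8, 8)
```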
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
The cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution, zero-centric residual image, which contains the high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z) - Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain on its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex-optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.