Perceptual Extreme Super Resolution Network with Receptive Field Block
- URL: http://arxiv.org/abs/2005.12597v1
- Date: Tue, 26 May 2020 09:38:33 GMT
- Title: Perceptual Extreme Super Resolution Network with Receptive Field Block
- Authors: Taizhang Shang, Qiuju Dai, Shengchen Zhu, Tong Yang, Yandong Guo
- Abstract summary: We develop a super-resolution network with a receptive field block based on Enhanced SRGAN, called RFB-ESRGAN.
The receptive field block (RFB) has previously achieved competitive results in object detection and classification.
- Score: 11.557328975199043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perceptual Extreme Super-Resolution for a single image is extremely difficult,
because the texture details of different images vary greatly. To tackle this
difficulty, we develop a super-resolution network with a receptive field block
based on Enhanced SRGAN. We call our network RFB-ESRGAN. The key contributions
are as follows. First, to extract multi-scale information and enhance feature
discriminability, we apply the receptive field block (RFB) to super resolution.
RFB has achieved competitive results in object detection and classification.
Second, instead of using large convolution kernels in the multi-scale receptive
field block, RFB uses several small kernels, which lets us extract detailed
features while reducing computational complexity. Third, we alternate between
different upsampling methods in the upsampling stage to reduce computational
complexity while still maintaining satisfactory performance. Fourth, we use an
ensemble of 10 models from different training iterations to improve robustness
and reduce the noise introduced by any individual model. Our experimental
results show the superior performance of RFB-ESRGAN. According to the
preliminary results of the NTIRE 2020 Perceptual Extreme Super-Resolution
Challenge, our solution ranks first among all the participants.
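The abstract describes replacing large kernels in the receptive field block with stacks of small kernels to capture multi-scale context at lower cost. Below is a minimal PyTorch sketch of that idea; the branch layout, per-branch channel split, dilation rates, and residual scaling are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn


class SmallKernelRFB(nn.Module):
    """Multi-branch block that approximates a large receptive field with
    stacks of small kernels plus dilated 3x3 convolutions.
    Branch layout and channel split are illustrative assumptions."""

    def __init__(self, channels: int = 64):
        super().__init__()
        c = channels // 4  # per-branch width (assumption)
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, c, 1),
            nn.Conv2d(c, c, 3, padding=1),
        )
        self.branch2 = nn.Sequential(
            nn.Conv2d(channels, c, 1),
            nn.Conv2d(c, c, 3, padding=1),
            nn.Conv2d(c, c, 3, padding=3, dilation=3),
        )
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, c, 1),
            nn.Conv2d(c, c, (1, 3), padding=(0, 1)),
            nn.Conv2d(c, c, (3, 1), padding=(1, 0)),
            nn.Conv2d(c, c, 3, padding=5, dilation=5),
        )
        self.branch4 = nn.Sequential(
            nn.Conv2d(channels, c, 1),
            nn.Conv2d(c, c, 3, padding=1),
            nn.Conv2d(c, c, 3, padding=1),
            nn.Conv2d(c, c, 3, padding=7, dilation=7),
        )
        self.fuse = nn.Conv2d(4 * c, channels, 1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        # Concatenate the multi-scale branches, fuse with a 1x1 conv,
        # and add a scaled residual connection (ESRGAN-style scaling).
        out = torch.cat(
            [self.branch1(x), self.branch2(x), self.branch3(x), self.branch4(x)],
            dim=1,
        )
        return x + 0.2 * self.act(self.fuse(out))


# Example: y = SmallKernelRFB(64)(torch.randn(1, 64, 32, 32))  # same spatial size
```

The point of the sketch is that every branch reaches a different effective receptive field using only 1x1, 1x3, 3x1, and 3x3 kernels, so no large-kernel convolution is needed.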
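The abstract also mentions alternating different upsampling methods in the upsampling stage. The sketch below assumes the two methods are nearest-neighbour interpolation and sub-pixel (PixelShuffle) convolution, applied in alternating 2x stages; the exact pairing, ordering, and stage count are assumptions.

```python
import torch
import torch.nn as nn


class AlternatingUpsampler(nn.Module):
    """Upsamples by 2x per stage, alternating a cheap interpolation step
    with a learned sub-pixel (PixelShuffle) step."""

    def __init__(self, channels: int = 64, num_stages: int = 4):
        super().__init__()
        self.stages = nn.ModuleList()
        for i in range(num_stages):
            if i % 2 == 0:
                # Nearest-neighbour interpolation followed by a 3x3 conv.
                self.stages.append(nn.Sequential(
                    nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(channels, channels, 3, padding=1),
                    nn.LeakyReLU(0.2, inplace=True),
                ))
            else:
                # Sub-pixel convolution: expand channels 4x, then PixelShuffle(2).
                self.stages.append(nn.Sequential(
                    nn.Conv2d(channels, channels * 4, 3, padding=1),
                    nn.PixelShuffle(2),
                    nn.LeakyReLU(0.2, inplace=True),
                ))

    def forward(self, x):
        for stage in self.stages:
            x = stage(x)
        return x  # spatial size grows by 2 ** num_stages
```

With `num_stages=4` the module produces the 16x scale factor used in the extreme super-resolution setting; the interpolation stages keep the learned parameter count and FLOPs lower than using sub-pixel convolution at every step.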
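Finally, the abstract refers to an ensemble of 10 models from different training iterations. The abstract does not say whether outputs or weights are averaged, so the following output-averaging sketch is only one plausible realisation, stated as an assumption.

```python
import torch


@torch.no_grad()
def ensemble_outputs(models, lr_image):
    """Average the super-resolved outputs of several checkpoints.

    `models` is a list of networks loaded from checkpoints saved at
    different training iterations; averaging their predictions is one
    simple way to reduce the noise of any individual model.
    """
    outputs = []
    for model in models:
        model.eval()
        outputs.append(model(lr_image))
    return torch.stack(outputs, dim=0).mean(dim=0)
```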
Related papers
- Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution [5.704360536038803]
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
Traditional networks are limited to upscaling images to a fixed scale, leading to the utilization of implicit neural functions for generating arbitrarily scaled images.
We introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales.
arXiv Detail & Related papers (2023-11-20T05:34:36Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z)
- ShuffleMixer: An Efficient ConvNet for Image Super-Resolution [88.86376017828773]
We propose ShuffleMixer, a lightweight image super-resolution network that explores large convolutions and channel split-shuffle operations.
Specifically, we develop a large depth-wise convolution and two projection layers based on channel splitting and shuffling as the basic component to mix features efficiently.
Experimental results demonstrate that the proposed ShuffleMixer is about 6x smaller than the state-of-the-art methods in terms of model parameters and FLOPs.
arXiv Detail & Related papers (2022-05-30T15:26:52Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- Infrared Image Super-Resolution via Heterogeneous Convolutional WGAN [4.6667021835430145]
We present a framework that employs heterogeneous kernel-based super-resolution Wasserstein GAN (HetSRWGAN) for IR image super-resolution.
HetSRWGAN achieves consistently better performance in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2021-09-02T14:01:05Z)
- Multi-Attention Generative Adversarial Network for Remote Sensing Image Super-Resolution [17.04588012373861]
Image super-resolution (SR) methods can generate remote sensing images with high spatial resolution without increasing the cost.
We propose a network based on the generative adversarial network (GAN) to generate high resolution remote sensing images.
arXiv Detail & Related papers (2021-07-14T08:06:19Z)
- Discrete Cosine Transform Network for Guided Depth Map Super-Resolution [19.86463937632802]
The goal is to use high-resolution (HR) RGB images to provide extra information on edges and object contours, so that low-resolution depth maps can be upsampled to HR ones.
We propose an advanced Discrete Cosine Transform Network (DCTNet), which is composed of four components.
We show that our method can generate accurate and HR depth maps, surpassing state-of-the-art methods.
arXiv Detail & Related papers (2021-04-14T17:01:03Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.