Residual Feature Distillation Network for Lightweight Image
Super-Resolution
- URL: http://arxiv.org/abs/2009.11551v1
- Date: Thu, 24 Sep 2020 08:46:40 GMT
- Title: Residual Feature Distillation Network for Lightweight Image
Super-Resolution
- Authors: Jie Liu, Jie Tang, Gangshan Wu
- Abstract summary: We propose a lightweight and accurate SISR model called residual feature distillation network (RFDN).
RFDN uses multiple feature distillation connections to learn more discriminative feature representations.
We also propose a shallow residual block (SRB) as the main building block of RFDN so that the network can benefit most from residual learning.
- Score: 40.52635571871426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in single image super-resolution (SISR) explored the power of
convolutional neural network (CNN) to achieve a better performance. Despite the
great success of CNN-based methods, it is not easy to apply these methods to
edge devices due to the requirement of heavy computation. To solve this
problem, various fast and lightweight CNN models have been proposed. The
information distillation network is one of the state-of-the-art methods, which
adopts the channel splitting operation to extract distilled features. However,
it is not clear enough how this operation helps in the design of efficient SISR
models. In this paper, we propose the feature distillation connection (FDC)
that is functionally equivalent to the channel splitting operation while being
more lightweight and flexible. Thanks to FDC, we can rethink the information
multi-distillation network (IMDN) and propose a lightweight and accurate SISR
model called residual feature distillation network (RFDN). RFDN uses multiple
feature distillation connections to learn more discriminative feature
representations. We also propose a shallow residual block (SRB) as the main
building block of RFDN so that the network can benefit most from residual
learning while still being lightweight enough. Extensive experimental results
show that the proposed RFDN achieves a better trade-off between performance and
model complexity than state-of-the-art methods. Moreover, we propose an enhanced
RFDN (E-RFDN), with which we won first place in the AIM 2020 efficient
super-resolution challenge. Code will be available at
https://github.com/njulj/RFDN.
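As a rough illustration of the two building pieces described above, here is a minimal PyTorch sketch of one residual feature distillation block, assuming the FDC is realized as a slim 1x1 convolution at each stage and the SRB as a 3x3 convolution with an identity skip. It is not the official implementation (the released code also includes an attention layer and specific channel settings); the channel and stage counts below are placeholders.

```python
import torch
import torch.nn as nn

class SRB(nn.Module):
    """Shallow residual block: a single 3x3 conv with an identity skip."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + x)

class RFDB(nn.Module):
    """Simplified residual feature distillation block: at each stage a 1x1
    conv (the feature distillation connection) splits off a slim set of
    distilled features while an SRB refines the rest for the next stage;
    all distilled features are finally fused by a 1x1 conv."""
    def __init__(self, channels=50, distilled=None, num_stages=3):
        super().__init__()
        distilled = distilled or channels // 2
        self.distill = nn.ModuleList(
            [nn.Conv2d(channels, distilled, 1) for _ in range(num_stages)])
        self.refine = nn.ModuleList([SRB(channels) for _ in range(num_stages)])
        self.distill_last = nn.Conv2d(channels, distilled, 1)
        self.fuse = nn.Conv2d(distilled * (num_stages + 1), channels, 1)

    def forward(self, x):
        feats, cur = [], x
        for dist, srb in zip(self.distill, self.refine):
            feats.append(dist(cur))   # FDC: lightweight distillation branch
            cur = srb(cur)            # SRB: residual refinement branch
        feats.append(self.distill_last(cur))
        return self.fuse(torch.cat(feats, dim=1)) + x  # block-level residual

if __name__ == "__main__":
    x = torch.randn(1, 50, 32, 32)
    print(RFDB(50)(x).shape)  # torch.Size([1, 50, 32, 32])
```

Because the 1x1 distillation branch replaces channel splitting, the number of distilled channels can be chosen independently of the trunk width, which is the flexibility the abstract refers to.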
Related papers
- Spatially-Adaptive Feature Modulation for Efficient Image
Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
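The summary does not detail SAFM itself; the following is a generic, hypothetical sketch of spatially-adaptive feature modulation, where channel groups gather context at progressively coarser scales and the result multiplicatively modulates the input. Module names and settings are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialModulation(nn.Module):
    """Generic spatially-adaptive feature modulation (illustrative, not the
    paper's exact SAFM): split channels into groups, gather context at
    progressively coarser scales, and use the result to modulate the input."""
    def __init__(self, channels, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        gc = channels // groups
        self.dwconv = nn.ModuleList(
            [nn.Conv2d(gc, gc, 3, padding=1, groups=gc) for _ in range(groups)])
        self.proj = nn.Conv2d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for i, (chunk, conv) in enumerate(zip(x.chunk(self.groups, dim=1), self.dwconv)):
            if i == 0:
                outs.append(conv(chunk))
            else:
                # coarser context: pool by 2**i, convolve, upsample back
                pooled = F.adaptive_max_pool2d(chunk, (max(h // 2**i, 1), max(w // 2**i, 1)))
                outs.append(F.interpolate(conv(pooled), size=(h, w), mode="nearest"))
        mod = self.act(self.proj(torch.cat(outs, dim=1)))
        return x * mod  # spatially-adaptive modulation

if __name__ == "__main__":
    y = SpatialModulation(36)(torch.randn(1, 36, 48, 48))
    print(y.shape)  # torch.Size([1, 36, 48, 48])
```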
arXiv Detail & Related papers (2023-02-27T14:19:31Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
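Purely to illustrate what a recursively defined residual network could mean, the toy sketch below builds a level-n block out of two level-(n-1) blocks wrapped in a skip connection. This reading of the name is an assumption, and the attention blocks mentioned in the summary are omitted.

```python
import torch
import torch.nn as nn

class ResidualPair(nn.Module):
    """Wrap two sub-blocks in a skip connection."""
    def __init__(self, first, second):
        super().__init__()
        self.first, self.second = first, second

    def forward(self, x):
        return x + self.second(self.first(x))

def make_block(channels, level):
    """Hypothetical recursive construction: a level-0 block is a plain
    conv + ReLU, and a level-n block is two level-(n-1) blocks wrapped
    in a residual connection."""
    if level == 0:
        return nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                             nn.ReLU(inplace=True))
    return ResidualPair(make_block(channels, level - 1),
                        make_block(channels, level - 1))

if __name__ == "__main__":
    block = make_block(32, level=3)   # 2**3 = 8 convolutions nested in skips
    print(block(torch.randn(1, 32, 24, 24)).shape)  # torch.Size([1, 32, 24, 24])
```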
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- IMDeception: Grouped Information Distilling Super-Resolution Network [7.6146285961466]
Single-Image-Super-Resolution (SISR) is a classical computer vision problem that has benefited from the recent advancements in deep learning methods.
In this work, we propose the Global Progressive Refinement Module (GPRM) as a less parameter-demanding alternative to the IIC module for feature aggregation.
We also propose Grouped Information Distilling Blocks (GIDB) to further decrease the number of parameters and floating point operations per second (FLOPS).
Experiments reveal that the proposed network performs on par with state-of-the-art models despite having a limited number of parameters and FLOPS.
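The summary's main lever is grouped convolution inside a distillation block; a hedged sketch of that idea (not the exact GIDB design) is below, with a parameter count comparing grouped and ungrouped refinement layers.

```python
import torch
import torch.nn as nn

class GroupedDistillBlock(nn.Module):
    """Illustrative distillation block whose refinement convolutions are
    grouped (groups > 1) to cut parameters and FLOPs; not the paper's
    exact GIDB."""
    def __init__(self, channels=48, distilled=24, groups=4, num_stages=3):
        super().__init__()
        self.distill = nn.ModuleList(
            [nn.Conv2d(channels, distilled, 1) for _ in range(num_stages + 1)])
        self.refine = nn.ModuleList(
            [nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
                nn.LeakyReLU(0.05, inplace=True))
             for _ in range(num_stages)])
        self.fuse = nn.Conv2d(distilled * (num_stages + 1), channels, 1)

    def forward(self, x):
        feats, cur = [], x
        for i, refine in enumerate(self.refine):
            feats.append(self.distill[i](cur))   # distilled (kept) features
            cur = refine(cur)                    # grouped refinement
        feats.append(self.distill[-1](cur))
        return self.fuse(torch.cat(feats, dim=1)) + x

if __name__ == "__main__":
    count = lambda m: sum(p.numel() for p in m.parameters())
    # grouped convs shrink the 3x3 refinement layers roughly by the group count
    print(count(GroupedDistillBlock(groups=1)), count(GroupedDistillBlock(groups=4)))
```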
arXiv Detail & Related papers (2022-04-25T06:43:45Z)
- Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs).
Our scale-recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient compared to current state-of-the-art approaches.
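A residual dense block in the commonly used sense can be sketched as follows; this is an illustrative rendering of the building unit named in the summary, and the scale-recurrent wrapper that reuses it across scale factors is not shown.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: each conv sees the concatenation of the block
    input and all previous layer outputs; a 1x1 conv fuses them and a local
    residual connects back to the input."""
    def __init__(self, channels, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))       # local residual learning

if __name__ == "__main__":
    print(RDB(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```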
arXiv Detail & Related papers (2022-01-28T09:18:43Z)
- Local-Selective Feature Distillation for Single Image Super-Resolution [42.83228585332463]
We propose a novel feature distillation (FD) method that is suitable for single image super-resolution (SISR).
We show that the existing FitNet-based FD method suffers from limitations in the SISR task, and propose to modify the FD algorithm to focus on local feature information.
We call our method local-selective feature distillation (LSFD) and verify that our method outperforms conventional FD methods in SISR problems.
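As an illustration of feature distillation that focuses on local information, the sketch below matches student to teacher features with a per-location weight derived from the teacher's activation strength. The weighting scheme and the channel adapter are assumptions, not the paper's exact LSFD formulation.

```python
import torch
import torch.nn as nn

def local_weighted_fd_loss(student_feat, teacher_feat, adapter):
    """Illustrative feature-distillation loss: match student features to
    teacher features, weighting each spatial location by the teacher's
    local activation strength so informative regions dominate the loss."""
    s = adapter(student_feat)                                  # align channel dims
    w = teacher_feat.abs().mean(dim=1, keepdim=True)           # per-pixel importance
    w = w / (w.amax(dim=(-2, -1), keepdim=True) + 1e-8)        # normalize to [0, 1]
    return (w * (s - teacher_feat) ** 2).mean()

if __name__ == "__main__":
    student = torch.randn(2, 32, 40, 40)   # hypothetical student feature map
    teacher = torch.randn(2, 64, 40, 40)   # hypothetical teacher feature map
    adapter = nn.Conv2d(32, 64, 1)         # 1x1 conv to bridge the channel mismatch
    print(local_weighted_fd_loss(student, teacher, adapter).item())
```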
arXiv Detail & Related papers (2021-11-22T05:05:37Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the lower-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
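A minimal sketch of the routing idea, assuming patches are scored by how much of their DCT energy lies outside the low-frequency corner; the patch size, threshold, and scoring rule are placeholders rather than the paper's exact policy.

```python
import torch

def dct_matrix(n):
    """Orthonormal DCT-II basis of size n x n."""
    k = torch.arange(n).float()
    basis = torch.cos(torch.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= (1.0 / n) ** 0.5
    basis[1:] *= (2.0 / n) ** 0.5
    return basis

def route_patches(x, patch=8, low=2, threshold=0.1):
    """Illustrative frequency-aware routing: split the image into patches,
    measure how much DCT energy lies outside the low-frequency corner, and
    flag high-frequency patches for the expensive branch."""
    d = dct_matrix(patch).to(x)
    patches = x.unfold(2, patch, patch).unfold(3, patch, patch)   # B,C,H/p,W/p,p,p
    coeff = d @ patches @ d.T                                     # 2-D DCT per patch
    energy = coeff.pow(2).sum(dim=(-2, -1))
    low_energy = coeff[..., :low, :low].pow(2).sum(dim=(-2, -1))
    high_ratio = 1 - low_energy / (energy + 1e-8)
    return high_ratio.mean(dim=1) > threshold                     # per-patch mask

if __name__ == "__main__":
    mask = route_patches(torch.randn(1, 3, 64, 64))   # True -> expensive branch
    print(mask.shape, mask.float().mean().item())
```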
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
IEEB extracts hierarchical low-resolution (LR) features and aggregates the obtained features step-by-step to increase the memory ability of the shallow layers on deep layers for SISR.
RB converts low-frequency features into high-frequency features by fusing global and local features.
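A structural skeleton of the three-sub-block layout is sketched below; layer counts, aggregation details, and the upsampling stage are illustrative stand-ins rather than the actual LESRCNN configuration.

```python
import torch
import torch.nn as nn

class LESRCNNSkeleton(nn.Module):
    """Structural sketch only: three successive sub-blocks followed by
    sub-pixel upsampling and reconstruction."""
    def __init__(self, channels=64, scale=2, depth=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # IEEB: hierarchical LR feature extraction with step-by-step aggregation
        self.ieeb = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(True))
             for _ in range(depth)])
        # RB: fuse the globally propagated head features with the aggregated local ones
        self.rb = nn.Conv2d(channels, channels, 3, padding=1)
        # IRB: refine features before reconstruction
        self.irb = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(True))
        self.up = nn.Sequential(nn.Conv2d(channels, 3 * scale**2, 3, padding=1),
                                nn.PixelShuffle(scale))

    def forward(self, x):
        head = self.head(x)
        feat, agg = head, 0
        for layer in self.ieeb:
            feat = layer(feat)
            agg = agg + feat          # step-by-step aggregation of hierarchical features
        fused = self.rb(agg) + head   # combine local aggregation with the global skip
        return self.up(self.irb(fused))

if __name__ == "__main__":
    print(LESRCNNSkeleton(scale=2)(torch.randn(1, 3, 32, 32)).shape)  # [1, 3, 64, 64]
```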
arXiv Detail & Related papers (2020-07-08T18:03:40Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of this iterative optimization.
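To make the iterative idea concrete, here is a generic refinement loop that repeatedly corrects the SR estimate using the back-projected residual against the observed LR image; this is an illustration of iterative optimization for SR, not ISRN's specific formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeRefiner(nn.Module):
    """Generic iterative SR loop: start from a bicubic upsample and repeatedly
    correct the estimate with a learned update driven by the residual between
    the re-downsampled estimate and the observed LR image."""
    def __init__(self, scale=2, steps=4, channels=32):
        super().__init__()
        self.scale, self.steps = scale, steps
        self.correct = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, lr):
        sr = F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)
        for _ in range(self.steps):
            down = F.interpolate(sr, size=lr.shape[-2:], mode="bicubic", align_corners=False)
            residual = F.interpolate(lr - down, size=sr.shape[-2:], mode="bicubic",
                                     align_corners=False)
            sr = sr + self.correct(residual)   # learned correction of the back-projected error
        return sr

if __name__ == "__main__":
    print(IterativeRefiner(scale=2)(torch.randn(1, 3, 24, 24)).shape)  # [1, 3, 48, 48]
```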
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- Multi-wavelet residual dense convolutional neural network for image denoising [2.500475462213752]
We use the short-term residual learning method to improve the performance and robustness of networks for image denoising tasks.
Here, we choose a multi-wavelet convolutional neural network (MWCNN) as the backbone and insert residual dense blocks (RDBs) in each of its layers.
Compared with other RDB-based networks, it can extract more features of the object from adjacent layers, preserve a large receptive field (RF), and boost computing efficiency.
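A minimal sketch of one wavelet-domain level: a Haar decomposition followed by a residual body standing in for the RDBs inserted into the MWCNN backbone; the body here is a plain two-conv placeholder, not an actual RDB.

```python
import torch
import torch.nn as nn

def haar_dwt(x):
    """One level of a 2-D Haar wavelet transform: returns the four sub-bands
    (LL, HL, LH, HH) concatenated along channels, at half spatial resolution."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 1::2, 0::2]
    c = x[:, :, 0::2, 1::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    hl = (-a - b + c + d) / 2
    lh = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return torch.cat([ll, hl, lh, hh], dim=1)

class WaveletLevel(nn.Module):
    """One encoder level of the sketched denoiser: a Haar decomposition
    followed by a residual body (stand-in for the RDBs the paper inserts
    into every layer of the MWCNN backbone)."""
    def __init__(self, in_ch, width=64):
        super().__init__()
        self.body = nn.Sequential(           # placeholder for an RDB
            nn.Conv2d(4 * in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 4 * in_ch, 3, padding=1))

    def forward(self, x):
        sub = haar_dwt(x)
        return sub + self.body(sub)          # residual learning in the wavelet domain

if __name__ == "__main__":
    print(WaveletLevel(3)(torch.randn(1, 3, 64, 64)).shape)  # [1, 12, 32, 32]
```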
arXiv Detail & Related papers (2020-02-19T17:21:37Z)