GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
- URL: http://arxiv.org/abs/2101.08525v1
- Date: Thu, 21 Jan 2021 10:09:47 GMT
- Title: GhostSR: Learning Ghost Features for Efficient Image Super-Resolution
- Authors: Ying Nie, Kai Han, Zhenhua Liu, An Xiao, Yiping Deng, Chunjing Xu,
Yunhe Wang
- Abstract summary: Single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs.
We propose to use the shift operation to generate the redundant features (i.e., ghost features) of SISR models.
We show that both non-compact and lightweight SISR models equipped with our proposed module achieve performance comparable to their baselines.
- Score: 49.393251361038025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern single image super-resolution (SISR) systems based on convolutional
neural networks (CNNs) achieve impressive performance but require huge
computational costs. The problem of feature redundancy is well studied in
visual recognition tasks, but rarely discussed in SISR. Based on the observation
that many features in SISR models are also similar to each other, we propose to
use the shift operation to generate the redundant features (i.e., ghost features).
Compared with depth-wise convolution, which is not friendly to GPUs or NPUs,
the shift operation brings practical inference acceleration for CNNs on common
hardware. We analyze the benefits of the shift operation for SISR and make the
shift orientation learnable via the Gumbel-Softmax trick. For a given
pre-trained model, we first cluster all filters in each convolutional layer to
identify the intrinsic ones for generating intrinsic features. Ghost features
are then derived by shifting these intrinsic features along a specific
orientation. The complete output features are constructed by concatenating the
intrinsic and ghost features together. Extensive experiments on several
benchmark models and datasets demonstrate that both non-compact and
lightweight SISR models equipped with our proposed module achieve performance
comparable to their baselines, with a large reduction in parameters,
FLOPs, and GPU latency. For instance, we reduce the parameters of the EDSR x2
network by 47%, its FLOPs by 46%, and its GPU latency by 41%, without
significant performance degradation.
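
The pipeline above (cheap intrinsic features, shifted ghost features, learnable orientation) can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming a 3x3 primary convolution, a 50/50 intrinsic/ghost split, and a 3x3 grid of candidate offsets; the module name GhostShiftConv and these hyper-parameters are assumptions for illustration, not the authors' implementation, and the clustering step that identifies intrinsic filters in a pre-trained model is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GhostShiftConv(nn.Module):
    """Half of the output channels are intrinsic features from a cheap
    convolution; the other half are ghost features obtained by shifting
    the intrinsic ones along a learnable orientation."""

    def __init__(self, in_ch, out_ch, max_shift=1):
        super().__init__()
        assert out_ch % 2 == 0
        self.half = out_ch // 2
        # Convolution producing only the intrinsic features (half the channels).
        self.primary = nn.Conv2d(in_ch, self.half, 3, padding=1)
        # Candidate shift orientations: every (dy, dx) offset in a small grid.
        self.offsets = [(dy, dx)
                        for dy in range(-max_shift, max_shift + 1)
                        for dx in range(-max_shift, max_shift + 1)]
        # One logit per (ghost channel, candidate orientation) pair.
        self.logits = nn.Parameter(torch.zeros(self.half, len(self.offsets)))

    def forward(self, x, tau=1.0):
        intrinsic = self.primary(x)
        if self.training:
            # Gumbel-Softmax: a differentiable (soft) choice of orientation.
            probs = F.gumbel_softmax(self.logits, tau=tau)        # (C, K)
            # torch.roll is a cyclic shift; a faithful implementation would
            # zero-pad at the borders instead of wrapping around.
            shifted = torch.stack(
                [torch.roll(intrinsic, shifts=off, dims=(2, 3))
                 for off in self.offsets], dim=2)                 # (N, C, K, H, W)
            ghost = (shifted * probs.view(1, self.half, -1, 1, 1)).sum(dim=2)
        else:
            # At inference each channel commits to one hard shift.
            picks = self.logits.argmax(dim=1).tolist()
            ghost = torch.stack(
                [torch.roll(intrinsic[:, c], shifts=self.offsets[k], dims=(1, 2))
                 for c, k in enumerate(picks)], dim=1)
        return torch.cat([intrinsic, ghost], dim=1)


layer = GhostShiftConv(64, 64)
out = layer(torch.randn(2, 64, 48, 48))   # -> (2, 64, 48, 48)
```

At inference the soft Gumbel-Softmax mixture collapses to one hard shift per channel, i.e. a pure memory offset with no multiply-adds, which is the source of the GPU/NPU friendliness the abstract refers to.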
Related papers
- Single image super-resolution based on trainable feature matching attention network [0.0]
Convolutional Neural Networks (CNNs) have been widely employed for image Super-Resolution (SR).
We introduce Trainable Feature Matching (TFM) to amalgamate explicit feature learning into CNNs, augmenting their representation capabilities.
We also propose a streamlined variant called Same-size-divided Region-level Non-Local (SRNL) to alleviate the computational demands of non-local operations.
arXiv Detail & Related papers (2024-05-29T08:31:54Z)
- DVMSR: Distillated Vision Mamba for Efficient Super-Resolution [7.551130027327461]
We propose DVMSR, a novel lightweight Image SR network that incorporates Vision Mamba and a distillation strategy.
Our proposed DVMSR can outperform state-of-the-art efficient SR methods in terms of model parameters.
arXiv Detail & Related papers (2024-05-05T17:34:38Z)
- Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, the Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
arXiv Detail & Related papers (2023-05-17T15:36:08Z)
- Incorporating Transformer Designs into Convolutions for Lightweight Image Super-Resolution [46.32359056424278]
Large convolutional kernels have become popular in designing convolutional neural networks.
The increase in kernel size also leads to a quadratic growth in the number of parameters, resulting in heavy computation and memory requirements.
We propose a neighborhood attention (NA) module that upgrades the standard convolution with a self-attention mechanism.
Building upon the NA module, we propose a lightweight single image super-resolution (SISR) network named TCSR.
arXiv Detail & Related papers (2023-03-25T01:32:18Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- Ghost-dil-NetVLAD: A Lightweight Neural Network for Visual Place Recognition [3.6249801498927923]
We propose a lightweight weakly supervised end-to-end neural network consisting of a front-end perception model called GhostCNN and a learnable VLAD layer as a back-end.
To enhance our proposed lightweight model further, we add dilated convolutions to the Ghost module to get features containing more spatial semantic information, improving accuracy.
arXiv Detail & Related papers (2021-12-22T06:05:02Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures (a generic sketch of this DCT-based routing appears after this list).
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A²F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
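
The frequency-aware entry above routes computation by DCT energy; since that splitting idea is generic, here is a hedged PyTorch sketch of it. FrequencyRouter, dct_matrix, the 8x8 patch size, the fixed energy threshold, and the two stand-in branches are all illustrative assumptions rather than that paper's architecture, and spatial dimensions are assumed divisible by the patch size.

```python
import math
import torch
import torch.nn as nn


def dct_matrix(n):
    # Orthonormal DCT-II basis matrix; row k is the k-th frequency.
    k = torch.arange(n, dtype=torch.float32)
    m = torch.cos(math.pi * k[:, None] * (k[None, :] + 0.5) / n) * math.sqrt(2.0 / n)
    m[0] /= math.sqrt(2.0)
    return m


class FrequencyRouter(nn.Module):
    """Routes each p x p patch to an expensive or a cheap branch according
    to how much of its DCT energy lies in the high frequencies."""

    def __init__(self, channels, patch=8, hf_thresh=0.5):
        super().__init__()
        self.p, self.hf_thresh = patch, hf_thresh
        self.register_buffer("dct", dct_matrix(patch))
        low = torch.zeros(patch, patch)
        low[: patch // 2, : patch // 2] = 1.0   # top-left = low frequencies
        self.register_buffer("low_mask", low)
        self.expensive = nn.Sequential(          # stand-in "costly" branch
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.cheap = nn.Conv2d(channels, channels, 1)  # stand-in "cheap" branch

    def forward(self, x):
        p = self.p
        patches = x.unfold(2, p, p).unfold(3, p, p)   # (N, C, H/p, W/p, p, p)
        coeffs = self.dct @ patches @ self.dct.T      # 2-D DCT of every patch
        energy = coeffs.pow(2)
        hf = (energy * (1 - self.low_mask)).sum((-2, -1)) \
            / energy.sum((-2, -1)).clamp_min(1e-8)    # high-freq energy ratio
        gate = (hf.mean(dim=1, keepdim=True) > self.hf_thresh).float()
        gate = gate.repeat_interleave(p, dim=2).repeat_interleave(p, dim=3)
        # For clarity both branches run densely and are blended per patch;
        # the actual saving comes from running each branch only on its patches.
        return gate * self.expensive(x) + (1 - gate) * self.cheap(x)


router = FrequencyRouter(64)
y = router(torch.randn(1, 64, 64, 64))   # -> (1, 64, 64, 64)
```

Note that the hard threshold here is a non-differentiable simplification; a trainable router (e.g. with a soft gate) would be needed to learn the split end-to-end.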
This list is automatically generated from the titles and abstracts of the papers on this site.