Lightweight Single-Image Super-Resolution Network with Attentive
Auxiliary Feature Learning
- URL: http://arxiv.org/abs/2011.06773v1
- Date: Fri, 13 Nov 2020 06:01:46 GMT
- Title: Lightweight Single-Image Super-Resolution Network with Attentive
Auxiliary Feature Learning
- Authors: Xuehui Wang, Qing Wang, Yuzhi Zhao, Junchi Yan, Lei Fan, Long Chen
- Abstract summary: We develop a computation efficient yet accurate network based on the proposed attentive auxiliary features (A$2$F) for SISR.
Experimental results on large-scale dataset demonstrate the effectiveness of the proposed model against the state-of-the-art (SOTA) SR methods.
- Score: 73.75457731689858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although convolutional network-based methods have boosted the
performance of single image super-resolution (SISR), their huge computation
costs restrict practical applicability. In this paper, we develop a
computation-efficient yet accurate network based on the proposed attentive
auxiliary features (A$^2$F) for SISR. First, to exploit the features from the
bottom layers, the auxiliary features from all previous layers are projected
into a common space. Then, to better utilize these projected auxiliary
features and filter out redundant information, channel attention is employed
to select the most important common features based on the current layer's
feature. We incorporate these two modules into a block and implement it with a
lightweight network.
Experimental results on large-scale dataset demonstrate the effectiveness of
the proposed model against the state-of-the-art (SOTA) SR methods. Notably,
when parameters are less than 320k, A$^2$F outperforms SOTA methods for all
scales, which proves its ability to better utilize the auxiliary features.
Codes are available at https://github.com/wxxxxxxh/A2F-SR.
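As a rough illustration of the mechanism the abstract describes (projecting auxiliary features from previous layers into a common space, then gating them with channel attention conditioned on the current layer's feature), here is a minimal NumPy sketch. All shapes, weight matrices, the squeeze-and-excitation-style gate, and the residual fusion are illustrative assumptions, not the authors' implementation; the linked repository contains the actual code.

```python
import numpy as np

def channel_attention_weights(current, w1, w2):
    """Squeeze-and-excitation-style gate computed from the current
    layer's feature. current: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = current.mean(axis=(1, 2))            # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # FC + ReLU -> (C//r,)
    return 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # FC + sigmoid -> (C,)

def a2f_block(current, aux_feats, proj_mats, w1, w2):
    """Project each previous layer's feature into a common space
    (a 1x1 convolution is a per-pixel matrix multiply), gate the fused
    auxiliary feature with channel attention, and add it back."""
    C, H, W = current.shape
    common = sum((P @ aux.reshape(aux.shape[0], -1)).reshape(C, H, W)
                 for P, aux in zip(proj_mats, aux_feats))
    gate = channel_attention_weights(current, w1, w2)  # pick useful channels
    return current + gate[:, None, None] * common      # residual fusion
```

The gate lies in (0, 1) per channel, so redundant projected channels are suppressed rather than hard-dropped, which is the "filter the redundant information" step in the abstract.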
Related papers
- HASN: Hybrid Attention Separable Network for Efficient Image Super-resolution [5.110892180215454]
Given limited hardware resources, lightweight methods for single-image super-resolution have achieved impressive performance.
We find that using residual connections after each block increases the model's storage and computational cost.
We use depthwise separable convolutions, fully connected layers, and activation functions as the basic feature extraction modules.
arXiv Detail & Related papers (2024-10-13T14:00:21Z) - Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, the Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
arXiv Detail & Related papers (2023-05-17T15:36:08Z) - Residual Local Feature Network for Efficient Super-Resolution [20.62809970985125]
In this work, we propose a novel Residual Local Feature Network (RLFN).
The main idea is to use three convolutional layers for residual local feature learning, simplifying feature aggregation.
In addition, we won the first place in the runtime track of the NTIRE 2022 efficient super-resolution challenge.
arXiv Detail & Related papers (2022-05-16T08:46:34Z) - Efficient Person Search: An Anchor-Free Approach [86.45858994806471]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
To achieve this goal, state-of-the-art models typically add a re-id branch upon two-stage detectors like Faster R-CNN.
In this work, we present an anchor-free approach that efficiently tackles this challenging task by introducing the following dedicated designs.
arXiv Detail & Related papers (2021-09-01T07:01:33Z) - CondenseNet V2: Sparse Feature Reactivation for Deep Networks [87.38447745642479]
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
We propose an alternative approach named sparse feature reactivation (SFR), which actively increases the utility of features for reuse.
Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
arXiv Detail & Related papers (2021-04-09T14:12:43Z) - GhostSR: Learning Ghost Features for Efficient Image Super-Resolution [49.393251361038025]
Single-image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs.
We propose to use shift operation to generate the redundant features (i.e., Ghost features) of SISR models.
We show that both the non-compact and lightweight SISR models embedded in our proposed module can achieve comparable performance to that of their baselines.
arXiv Detail & Related papers (2021-01-21T10:09:47Z) - MPRNet: Multi-Path Residual Network for Lightweight Image Super
Resolution [2.3576437999036473]
A novel lightweight super resolution network is proposed, which improves the SOTA performance in lightweight SR.
The proposed architecture also contains a new attention mechanism, Two-Fold Attention Module, to maximize the representation ability of the model.
arXiv Detail & Related papers (2020-11-09T17:11:15Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolutions to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
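Several of the lightweight designs listed above (e.g. HASN's basic feature-extraction modules) build on depthwise separable convolutions. The following NumPy sketch is illustrative only: the naive loops are for clarity, not performance, and the shapes and helper names are assumptions, not code from any of the papers. It shows why the factorization is cheap: a depthwise k x k pass filters each channel independently, and a 1x1 pointwise pass mixes channels.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weight):
    """Depthwise k x k convolution (one kernel per input channel)
    followed by a 1x1 pointwise convolution that mixes channels.
    x: (C_in, H, W); dw_kernels: (C_in, k, k); pw_weight: (C_out, C_in)."""
    c_in, h, w = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # 'same' padding
    dw = np.empty_like(x)
    for c in range(c_in):              # each channel filtered independently
        for i in range(h):
            for j in range(w):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise 1x1 conv = channel-mixing matrix multiply at every pixel
    return (pw_weight @ dw.reshape(c_in, -1)).reshape(-1, h, w)

def param_counts(c_in, c_out, k):
    """Weight counts (ignoring bias) for a standard conv vs the
    depthwise separable factorization with the same in/out shapes."""
    standard = c_out * c_in * k * k
    separable = c_in * k * k + c_out * c_in
    return standard, separable
```

For example, with 4 input channels, 6 output channels, and 3x3 kernels, the standard convolution needs 216 weights while the separable factorization needs 60, which is the kind of saving these lightweight SR networks rely on.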
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.