Fast and Memory-Efficient Network Towards Efficient Image
Super-Resolution
- URL: http://arxiv.org/abs/2204.08397v1
- Date: Mon, 18 Apr 2022 16:49:20 GMT
- Title: Fast and Memory-Efficient Network Towards Efficient Image
Super-Resolution
- Authors: Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu
- Abstract summary: We build a fast and memory-efficient network (FMEN) for image super-resolution on resource-constrained devices.
FMEN runs 33% faster and reduces memory consumption by 74% compared with the state-of-the-art EISR model E-RFDN.
FMEN-S achieves the lowest memory consumption and the second shortest runtime in the NTIRE 2022 challenge on efficient super-resolution.
- Score: 44.909233016062906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Runtime and memory consumption are two important aspects for efficient image
super-resolution (EISR) models to be deployed on resource-constrained devices.
Recent advances in EISR exploit distillation and aggregation strategies with
numerous channel split and concatenation operations to make full use of
limited hierarchical features. In contrast, sequential network operations avoid
frequently accessing preceding states and extra nodes, and thus reduce both
memory consumption and runtime overhead. Following this idea,
we design our lightweight network backbone by mainly stacking multiple highly
optimized convolution and activation layers and decreasing the usage of feature
fusion. We propose a novel sequential attention branch, where every pixel is
assigned an importance factor according to local and global contexts, to enhance
high-frequency details. In addition, we tailor the residual block for EISR and
propose an enhanced residual block (ERB) to further accelerate the network
inference. Finally, combining all the above techniques, we construct a fast and
memory-efficient network (FMEN) and its small version FMEN-S, which runs 33%
faster and reduces memory consumption by 74% compared with the state-of-the-art
EISR model E-RFDN, the champion of the AIM 2020 efficient super-resolution
challenge. In addition, FMEN-S achieves the lowest memory consumption and the
second shortest runtime in the NTIRE 2022 challenge on efficient super-resolution.
Code is available at https://github.com/NJU-Jet/FMEN.
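To make the abstract's two central ideas concrete, here is a hedged PyTorch sketch: an enhanced-residual-block style unit whose skip connection is folded into the convolution weights for skip-free inference, and a per-pixel gate built from local and global context. The module names, kernel sizes, and fusion details are illustrative assumptions, not FMEN's published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ERBSketch(nn.Module):
    """Residual unit whose skip branch can be folded away at inference.
    An assumed form of the paper's enhanced residual block (ERB)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.fused = False

    @torch.no_grad()
    def fuse(self):
        # Adding an identity kernel to the 3x3 weights turns conv(x) + x
        # into a single convolution, so no extra activation tensor is
        # kept alive for the skip branch.
        identity = torch.zeros_like(self.conv.weight)
        for i in range(self.conv.in_channels):
            identity[i, i, 1, 1] = 1.0
        self.conv.weight.add_(identity)
        self.fused = True

    def forward(self, x):
        y = self.conv(x)
        return y if self.fused else y + x

class PixelGateSketch(nn.Module):
    """Per-pixel importance factor from local and global context.
    An assumed form of the sequential attention branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)  # local context
        self.globl = nn.Conv2d(channels, channels, 1)             # global context

    def forward(self, x):
        g = F.adaptive_avg_pool2d(x, 1)              # pooled global statistics
        gate = torch.sigmoid(self.local(x) + self.globl(g))
        return x * gate                              # emphasize informative pixels
```

Folding the identity into the 3x3 kernel is exact for stride-1, padding-1 convolutions, which is why the fused network can run as a purely sequential stack.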
Related papers
- GRAN: Ghost Residual Attention Network for Single Image Super Resolution [44.4178326950426]
This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome the drawbacks of the standard convolutional operation.
The Ghost module reveals information underlying intrinsic features by replacing standard convolutions with cheap linear operations, as sketched below.
Experiments conducted on the benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative evaluations.
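The Ghost idea is simple enough to sketch in PyTorch: a small standard convolution produces "intrinsic" features, then cheap depthwise (linear) operations generate the remaining "ghost" channels. The ratio and kernel sizes are illustrative defaults, not GRAN's configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        # out_ch should be divisible by ratio in this sketch.
        intrinsic = out_ch // ratio
        self.primary = nn.Conv2d(in_ch, intrinsic, 3, padding=1)
        # Depthwise conv as the cheap "linear operation" over intrinsic features.
        self.cheap = nn.Conv2d(intrinsic, out_ch - intrinsic, 3,
                               padding=1, groups=intrinsic)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```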
arXiv Detail & Related papers (2023-02-28T13:26:24Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Efficient Image Super-Resolution using Vast-Receptive-Field Attention [49.87316814164699]
The attention mechanism plays a pivotal role in designing advanced super-resolution (SR) networks.
In this work, we design an efficient SR network by improving the attention mechanism.
We propose VapSR, the VAst-receptive-field Pixel attention network.
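Pixel attention assigns a gate to every spatial location; one hedged way to enlarge its receptive field cheaply, in the spirit the summary describes, is a dilated depthwise convolution. The kernel size and dilation below are assumptions, not VapSR's published settings.

```python
import torch
import torch.nn as nn

class VastReceptivePixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.point = nn.Conv2d(channels, channels, 1)
        # Dilated depthwise conv: 13x13 effective receptive field at low cost.
        self.spatial = nn.Conv2d(channels, channels, 5, padding=6,
                                 dilation=3, groups=channels)

    def forward(self, x):
        attn = torch.sigmoid(self.spatial(self.point(x)))
        return x * attn  # per-pixel gating
```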
arXiv Detail & Related papers (2022-10-12T07:01:00Z)
- Residual Local Feature Network for Efficient Super-Resolution [20.62809970985125]
In this work, we propose a novel Residual Local Feature Network (RLFN).
The main idea is to use three convolutional layers for residual local feature learning, simplifying feature aggregation.
In addition, we won first place in the runtime track of the NTIRE 2022 efficient super-resolution challenge.
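Read literally, "three convolutional layers for residual local feature learning" is a plain residual unit; a hedged sketch follows. The activations and channel widths are assumptions rather than RLFN's exact block.

```python
import torch.nn as nn

class ResidualLocalFeatureBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Three stacked 3x3 convolutions, no channel split or concat.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x) + x  # single residual connection
```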
arXiv Detail & Related papers (2022-05-16T08:46:34Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success in image super-resolution (SR).
However, most deep CNN-based SR models require heavy computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
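Pixel-unshuffle rearranges a (C, H, W) tensor into (C*s*s, H/s, W/s), so downsampling loses no information; PyTorch exposes it as F.pixel_unshuffle. The 1x1 fusion convolution below is an assumed way to compress the rearranged channels, not HPUN's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelUnshuffleDown(nn.Module):
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.fuse = nn.Conv2d(channels * scale * scale, channels, 1)

    def forward(self, x):
        # (B, C, H, W) -> (B, C*s*s, H/s, W/s): lossless spatial-to-channel swap.
        return self.fuse(F.pixel_unshuffle(x, self.scale))
```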
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- Towards Memory-Efficient Neural Networks via Multi-Level in situ Generation [10.563649948220371]
Deep neural networks (DNNs) have shown superior performance in a variety of tasks.
As they rapidly evolve, their escalating computation and memory demands make it challenging to deploy them on resource-constrained edge devices.
We propose a general and unified framework to trade expensive memory transactions with ultra-fast on-chip computations.
arXiv Detail & Related papers (2021-08-25T18:50:24Z)
- CondenseNet V2: Sparse Feature Reactivation for Deep Networks [87.38447745642479]
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
We propose an alternative approach named sparse feature reactivation (SFR), which aims to actively increase the utility of features for reuse.
Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
arXiv Detail & Related papers (2021-04-09T14:12:43Z)
- Hierarchical Residual Attention Network for Single Image Super-Resolution [2.0571256241341924]
This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation.
Our proposed architecture surpasses state-of-the-art performance on several datasets while maintaining a relatively low computation and memory footprint.
arXiv Detail & Related papers (2020-12-08T17:24:28Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computationally efficient yet accurate network based on the proposed attentive auxiliary features (A²F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- Improving Memory Utilization in Convolutional Neural Network Accelerators [16.340620299847384]
We propose a mapping method that allows activation layers to overlap and thus utilize the memory more efficiently.
Experiments with various real-world object detector networks show that the proposed mapping technique can decrease activation memory by up to 32.9%.
For higher-resolution denoising networks, we achieve activation memory savings of 48.8%.
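A back-of-the-envelope illustration of why overlapping activation buffers helps: in a linear chain of layers only the current layer's input and output must be live, so peak activation memory drops from the sum of all buffers to the largest adjacent pair. The sizes below are made up for illustration; the paper's mapping method operates on real accelerator layouts.

```python
# Per-layer output activation sizes along a linear chain (e.g. KB).
acts = [64, 128, 128, 32, 16]

naive_peak = sum(acts)                          # every buffer kept alive
overlap_peak = max(a + b for a, b in zip(acts, acts[1:]))  # only input + output live

print(naive_peak, overlap_peak)                 # 368 vs 256 in this toy case
```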
arXiv Detail & Related papers (2020-07-20T09:34:36Z)