GRAN: Ghost Residual Attention Network for Single Image Super Resolution
- URL: http://arxiv.org/abs/2302.14557v2
- Date: Thu, 2 Mar 2023 02:01:19 GMT
- Title: GRAN: Ghost Residual Attention Network for Single Image Super Resolution
- Authors: Axi Niu, Pei Wang, Yu Zhu, Jinqiu Sun, Qingsen Yan, Yanning Zhang
- Abstract summary: This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome the drawbacks of the standard convolutional operation.
The Ghost Module reveals information underlying the intrinsic features by employing linear operations in place of standard convolutions.
Experiments conducted on benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative evaluations.
- Score: 44.4178326950426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, many works have designed wider and deeper networks to achieve higher image super-resolution performance. Despite their outstanding performance, they still suffer from high computational cost, which prevents them from being directly applied to embedded devices. To reduce computational cost while maintaining performance, we propose a novel Ghost Residual Attention Network (GRAN) for efficient super-resolution. This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome a drawback of the standard convolutional operation, i.e., the redundancy of intermediate features. GRAB consists of the Ghost Module and the Channel and Spatial Attention Module (CSAM) to alleviate the generation of redundant features. Specifically, the Ghost Module reveals information underlying the intrinsic features by employing linear operations in place of standard convolutions. By reducing redundant features with the Ghost Module, our model lowers the network's memory and compute requirements. The CSAM pays comprehensive attention to both where and what features are extracted, which is critical to recovering image details. Experiments conducted on benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative evaluations. Compared to the baseline models, we achieve higher performance at lower computational cost, reducing parameters and FLOPs by more than ten times.
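To make the abstract's building blocks concrete, here is a minimal PyTorch sketch, not the authors' released code: a GhostNet-style Ghost Module (a slim convolution produces intrinsic features; cheap depthwise "linear" operations produce the ghost features), a CBAM-style channel-and-spatial attention module standing in for CSAM, and a residual composition along the lines of GRAB. The class names, the `ratio` and `reduction` parameters, and the composition order are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """GhostNet-style module: a slim standard convolution yields intrinsic
    features; cheap depthwise ("linear") operations yield ghost features."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio      # intrinsic feature maps
        ghost_ch = out_ch - init_ch    # cheaply generated feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(    # depthwise conv as the cheap operation
            nn.Conv2d(init_ch, ghost_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        return torch.cat([intrinsic, self.cheap(intrinsic)], dim=1)

class CSAM(nn.Module):
    """Channel and Spatial Attention sketch (CBAM-style): channel attention
    answers "what", spatial attention answers "where"."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)            # reweight channels ("what")
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial(torch.cat([avg, mx], dim=1))  # "where"

class GRAB(nn.Module):
    """Ghost Residual Attention Block: Ghost Module + CSAM with a skip."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(GhostModule(ch, ch), CSAM(ch))

    def forward(self, x):
        return x + self.body(x)

block = GRAB(64)
print(block(torch.randn(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```

The point of the Ghost split is that only `out_ch // ratio` maps pay for a full convolution over all input channels; each remaining map costs a single depthwise pass, which is where the parameter and FLOP savings come from.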
Related papers
- HASN: Hybrid Attention Separable Network for Efficient Image Super-resolution [5.110892180215454]
Lightweight methods for single image super-resolution have achieved impressive performance under limited hardware resources.
We find that using residual connections after each block increases the model's storage and computational cost.
We use depthwise separable convolutions, fully connected layers, and activation functions as the basic feature extraction modules (a depthwise separable convolution is sketched after this list).
arXiv Detail & Related papers (2024-10-13T14:00:21Z)
- DVMSR: Distillated Vision Mamba for Efficient Super-Resolution [7.551130027327461]
We propose DVMSR, a novel lightweight Image SR network that incorporates Vision Mamba and a distillation strategy.
Our proposed DVMSR can outperform state-of-the-art efficient SR methods in terms of model parameters.
arXiv Detail & Related papers (2024-05-05T17:34:38Z)
- HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z)
- Efficient Image Super-Resolution with Feature Interaction Weighted Hybrid Network [101.53907377000445]
Lightweight image super-resolution aims to reconstruct high-resolution images from low-resolution images at low computational cost.
Existing methods result in the loss of middle-layer features due to activation functions.
We propose a Feature Interaction Weighted Hybrid Network (FIWHN) to minimize the impact of intermediate feature loss on reconstruction quality.
arXiv Detail & Related papers (2022-12-29T05:57:29Z)
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have obtained remarkable performance in single image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution [44.909233016062906]
We build a memory-efficient image super-resolution network (FMEN) for resource-constrained devices.
FMEN runs 33% faster and reduces memory consumption by 74% compared with the state-of-the-art EISR model E-RFDN.
FMEN-S achieves the lowest memory consumption and the second shortest runtime in NTIRE 2022 challenge on efficient super-resolution.
arXiv Detail & Related papers (2022-04-18T16:49:20Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task (a pixel-unshuffle sketch appears after this list).
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- GhostSR: Learning Ghost Features for Efficient Image Super-Resolution [49.393251361038025]
Single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance while requiring huge computational costs.
We propose to use the shift operation to generate the redundant features (i.e., Ghost features) of SISR models (see the shift sketch after this list).
We show that both non-compact and lightweight SISR models equipped with our proposed module can achieve performance comparable to their baselines.
arXiv Detail & Related papers (2021-01-21T10:09:47Z)
- Hierarchical Residual Attention Network for Single Image Super-Resolution [2.0571256241341924]
This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation.
Our proposed architecture surpasses state-of-the-art performance on several datasets, while maintaining a relatively low computation and memory footprint.
arXiv Detail & Related papers (2020-12-08T17:24:28Z)
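For the HASN entry above, which builds its feature extraction on depthwise separable convolutions: the factorization replaces one dense convolution with a per-channel spatial convolution followed by a 1x1 channel-mixing convolution. A minimal sketch (names are illustrative, not HASN's code):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorized convolution: depthwise (spatial filtering per channel)
    followed by pointwise 1x1 (channel mixing)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

conv = DepthwiseSeparableConv(64, 64)
print(conv(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```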
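For the HPUN entry, pixel unshuffle is the standard space-to-depth rearrangement: a lossless downsampling that trades spatial resolution for channels. A minimal sketch of such a downsampling step (the 1x1 mixing convolution and the channel count of 64 are assumptions, not HPUN's exact module):

```python
import torch
import torch.nn as nn

# Lossless space-to-depth downsampling, then a 1x1 conv to mix channels.
downsample = nn.Sequential(
    nn.PixelUnshuffle(2),      # (N, C, H, W) -> (N, 4C, H/2, W/2)
    nn.Conv2d(4 * 64, 64, 1),  # assumes C = 64 feature channels
)

print(downsample(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 16, 16])
```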
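And for the GhostSR entry, the shift operation replaces convolution as the generator of redundant (ghost) features: intrinsic feature maps are spatially displaced and concatenated back. A simplified sketch; in GhostSR the shift offsets are learned, while here they are fixed for brevity:

```python
import torch
import torch.nn.functional as F

def shift_ghost_features(intrinsic, offsets=((1, 0), (-1, 0), (0, 1))):
    """Generate ghost features by zero-padded spatial shifts of the
    intrinsic maps, then concatenate them with the originals."""
    n, c, h, w = intrinsic.shape
    ghosts = []
    for dy, dx in offsets:
        # F.pad on NCHW tensors takes (left, right, top, bottom)
        padded = F.pad(intrinsic, (max(dx, 0), max(-dx, 0),
                                   max(dy, 0), max(-dy, 0)))
        top, left = max(-dy, 0), max(-dx, 0)
        ghosts.append(padded[..., top:top + h, left:left + w])
    return torch.cat([intrinsic] + ghosts, dim=1)

x = torch.randn(1, 16, 8, 8)
print(shift_ghost_features(x).shape)  # torch.Size([1, 64, 8, 8])
```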