ShuffleMixer: An Efficient ConvNet for Image Super-Resolution
- URL: http://arxiv.org/abs/2205.15175v1
- Date: Mon, 30 May 2022 15:26:52 GMT
- Title: ShuffleMixer: An Efficient ConvNet for Image Super-Resolution
- Authors: Long Sun, Jinshan Pan, Jinhui Tang
- Abstract summary: We propose ShuffleMixer, for lightweight image super-resolution that explores large convolution and channel split-shuffle operation.
Specifically, we develop a large depth-wise convolution and two projection layers based on channel splitting and shuffling as the basic component to mix features efficiently.
Experimental results demonstrate that the proposed ShuffleMixer is about 6x smaller than the state-of-the-art methods in terms of model parameters and FLOPs.
- Score: 88.86376017828773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lightweight and efficiency are critical drivers for the practical application
of image super-resolution (SR) algorithms. We propose a simple and effective
approach, ShuffleMixer, for lightweight image super-resolution that explores
large convolution and channel split-shuffle operation. In contrast to previous
SR models that simply stack multiple small kernel convolutions or complex
operators to learn representations, we explore a large kernel ConvNet for
mobile-friendly SR design. Specifically, we develop a large depth-wise
convolution and two projection layers based on channel splitting and shuffling
as the basic component to mix features efficiently. Since the contexts of
natural images are strongly locally correlated, using large depth-wise
convolutions only is insufficient to reconstruct fine details. To overcome this
problem while maintaining the efficiency of the proposed module, we introduce
Fused-MBConvs into the proposed network to model the local connectivity of
different features. Experimental results demonstrate that the proposed
ShuffleMixer is about 6x smaller than the state-of-the-art methods in terms of
model parameters and FLOPs while achieving competitive performance. In NTIRE
2022, our primary method won the model complexity track of the Efficient
Super-Resolution Challenge [23]. The code is available at
https://github.com/sunny2109/MobileSR-NTIRE2022.
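The two building blocks described in the abstract (the split-shuffle mixer with a large depth-wise convolution, and the Fused-MBConv for local connectivity) can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the official implementation (see the linked repository for that): the kernel size of 7, the expansion ratio of 2, the SiLU activations, and the residual placements are assumptions made for the example.

```python
# Minimal sketch of a ShuffleMixer-style block: channel split, two point-wise
# projections, channel shuffle, a large depth-wise convolution, and a
# Fused-MBConv for local connectivity. Hyperparameters are illustrative.
import torch
import torch.nn as nn


def channel_shuffle(x, groups=2):
    # Interleave channels across the groups produced by the split.
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)


class ShuffleMixerLayer(nn.Module):
    """Split channels, project one half with two 1x1 layers, shuffle the
    halves back together, then mix spatially with a large depth-wise conv."""

    def __init__(self, channels, kernel_size=7):  # kernel size is an assumption
        super().__init__()
        half = channels // 2
        self.project = nn.Sequential(
            nn.Conv2d(half, half, 1),   # first point-wise projection
            nn.SiLU(inplace=True),
            nn.Conv2d(half, half, 1),   # second point-wise projection
        )
        self.spatial = nn.Conv2d(
            channels, channels, kernel_size,
            padding=kernel_size // 2, groups=channels,  # large depth-wise conv
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)                      # channel split
        x = torch.cat([self.project(x1), x2], dim=1)
        x = channel_shuffle(x, groups=2)                # channel shuffle
        return x + self.spatial(x)                      # spatial mixing


class FusedMBConv(nn.Module):
    """Fused-MBConv (EfficientNetV2 style): a 3x3 conv that expands channels,
    a 1x1 projection back, and a residual connection."""

    def __init__(self, channels, expansion=2):  # expansion ratio is an assumption
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.SiLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x):
        return x + self.body(x)


if __name__ == "__main__":
    x = torch.randn(1, 32, 48, 48)
    block = nn.Sequential(ShuffleMixerLayer(32), FusedMBConv(32))
    print(block(x).shape)  # torch.Size([1, 32, 48, 48])
```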
Related papers
- Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach [58.57026686186709]
We introduce the Convolutional Transformer layer (ConvFormer) and propose a ConvFormer-based Super-Resolution network (CFSR).
CFSR inherits the advantages of both convolution-based and transformer-based approaches.
Experiments demonstrate that CFSR strikes an optimal balance between computational cost and performance.
arXiv Detail & Related papers (2024-01-11T03:08:00Z)
- WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution [2.0477182014909205]
We propose a new neural network -- WaveMixSR -- for image super-resolution based on WaveMix architecture.
WaveMixSR achieves competitive performance in all datasets and reaches state-of-the-art performance in the BSD100 dataset on multiple super-resolution tasks.
arXiv Detail & Related papers (2023-07-01T21:25:03Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z)
- Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer [27.51790638626891]
Single-image super-resolution (SISR) has achieved significant breakthroughs with the development of deep learning, but high-performing models are often too heavy for practical deployment.
To address this issue, we propose a Lightweight Bimodal Network (LBNet) for SISR.
Specifically, an effective Symmetric CNN is designed for local feature extraction and coarse image reconstruction.
arXiv Detail & Related papers (2022-04-28T04:43:22Z)
- Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success on image super-resolution (SR).
However, most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.