Feature Distillation Interaction Weighting Network for Lightweight Image
Super-Resolution
- URL: http://arxiv.org/abs/2112.08655v1
- Date: Thu, 16 Dec 2021 06:20:35 GMT
- Title: Feature Distillation Interaction Weighting Network for Lightweight Image
Super-Resolution
- Authors: Guangwei Gao, Wenjie Li, Juncheng Li, Fei Wu, Huimin Lu, Yi Yu
- Abstract summary: We propose a lightweight yet efficient Feature Distillation Interaction Weighted Network (FDIWN)
FDIWN strikes a better balance between model performance and efficiency than other models.
- Score: 25.50790871331823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural network-based single-image super-resolution (SISR) has
made great progress in recent years. However, these methods are difficult to
apply to real-world scenarios due to their computational and memory costs.
Meanwhile, how to take full advantage of the intermediate features under the
constraints of limited parameters and calculations is also a huge challenge. To
alleviate these issues, we propose a lightweight yet efficient Feature
Distillation Interaction Weighted Network (FDIWN). Specifically, FDIWN utilizes
a series of specially designed Feature Shuffle Weighted Groups (FSWG) as the
backbone, and several novel mutual Wide-residual Distillation Interaction
Blocks (WDIB) form an FSWG. In addition, Wide Identical Residual Weighting
(WIRW) units and Wide Convolutional Residual Weighting (WCRW) units are
introduced into WDIB for better feature distillation. Moreover, a Wide-Residual
Distillation Connection (WRDC) framework and a Self-Calibration Fusion (SCF)
unit are proposed to interact features with different scales more flexibly and
efficiently. Extensive experiments show that our FDIWN outperforms other
models, striking a good balance between model performance and efficiency. The
code is available at https://github.com/IVIPLab/FDIWN.
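As a rough illustration of the feature-distillation idea behind FDIWN and related models (progressively splitting intermediate features along the channel axis, then fusing the distilled pieces with a weighting step), here is a minimal numpy sketch. The function names, split ratio, and uniform weights are hypothetical placeholders for the learned weighting and convolutional refinement; this is not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def distill_step(feat, distill_ratio=0.25):
    """Split features channel-wise: a small 'distilled' slice is kept
    as-is, the remainder is passed on for further refinement.
    (Hypothetical sketch of the feature-distillation idea.)"""
    c = feat.shape[0]
    k = max(1, int(c * distill_ratio))
    return feat[:k], feat[k:]

def fdiwn_like_block(feat, steps=3):
    """Repeatedly distill, then fuse all distilled slices by
    concatenation along the channel axis with a per-channel weighting.
    A learned weighting/convolution would replace the uniform weights."""
    kept = []
    x = feat
    for _ in range(steps):
        d, x = distill_step(x)
        kept.append(d)
    kept.append(x)  # final remainder after the last distillation step
    fused = np.concatenate(kept, axis=0)
    # placeholder for learned per-channel weighting: uniform weights
    weights = np.ones((fused.shape[0], 1, 1)) / fused.shape[0]
    return fused * weights

feat = np.random.rand(16, 8, 8)  # (C, H, W) feature map
out = fdiwn_like_block(feat)
print(out.shape)  # (16, 8, 8): channel count is preserved overall
```

Note that the total channel count is preserved (the distilled slices plus the final remainder reassemble the original width), which is what lets such blocks stay lightweight while reusing intermediate features.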
Related papers
- HASN: Hybrid Attention Separable Network for Efficient Image Super-resolution [5.110892180215454]
Lightweight methods for single-image super-resolution have gained popularity under limited hardware resources and achieved impressive performance.
We find that using residual connections after each block increases the model's storage and computational cost.
We use depthwise separable convolutions, fully connected layers, and activation functions as the basic feature extraction modules.
arXiv Detail & Related papers (2024-10-13T14:00:21Z) - Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach [58.57026686186709]
We introduce the Convolutional Transformer layer (ConvFormer) and propose a ConvFormer-based Super-Resolution network (CFSR)
CFSR inherits the advantages of both convolution-based and transformer-based approaches.
Experiments demonstrate that CFSR strikes an optimal balance between computational cost and performance.
arXiv Detail & Related papers (2024-01-11T03:08:00Z) - Spatially-Adaptive Feature Modulation for Efficient Image
Super-Resolution [90.16462805389943]
We develop a spatially-adaptive feature modulation (SAFM) mechanism upon a vision transformer (ViT)-like block.
The proposed method is $3\times$ smaller than state-of-the-art efficient SR methods.
arXiv Detail & Related papers (2023-02-27T14:19:31Z) - Efficient Image Super-Resolution with Feature Interaction Weighted Hybrid Network [101.53907377000445]
Lightweight image super-resolution aims to reconstruct high-resolution images from low-resolution images using low computational costs.
Existing methods result in the loss of middle-layer features due to activation functions.
We propose a Feature Interaction Weighted Hybrid Network (FIWHN) to minimize the impact of intermediate feature loss on reconstruction quality.
arXiv Detail & Related papers (2022-12-29T05:57:29Z) - Bitwidth Heterogeneous Federated Learning with Progressive Weight
Dequantization [58.31288475660333]
We introduce a pragmatic Federated Learning scenario with bitwidth Heterogeneous Federated Learning (BHFL)
BHFL brings in a new challenge, that the aggregation of model parameters with different bitwidths could result in severe performance degeneration.
We propose ProWD framework, which has a trainable weight dequantizer at the central server that progressively reconstructs the low-bitwidth weights into higher bitwidth weights, and finally into full-precision weights.
arXiv Detail & Related papers (2022-02-23T12:07:02Z) - Asymmetric CNN for image super-resolution [102.96131810686231]
Deep convolutional neural networks (CNNs) have been widely applied for low-level vision over the past five years.
We propose an asymmetric CNN (ACNet) comprising an asymmetric block (AB), a memory enhancement block (MEB) and a high-frequency feature enhancement block (HFFEB) for image super-resolution.
Our ACNet can effectively address single image super-resolution (SISR), blind SISR, and blind SISR with blind noise.
arXiv Detail & Related papers (2021-03-25T07:10:46Z) - Lightweight Image Super-Resolution with Multi-scale Feature Interaction
Network [15.846394239848959]
We present a lightweight multi-scale feature interaction network (MSFIN)
For lightweight SISR, MSFIN expands the receptive field and adequately exploits the informative features of the low-resolution observed images.
Our proposed MSFIN can achieve comparable performance against the state-of-the-arts with a more lightweight model.
arXiv Detail & Related papers (2021-03-24T07:25:21Z) - Lightweight Single-Image Super-Resolution Network with Attentive
Auxiliary Feature Learning [73.75457731689858]
We develop a computation efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on large-scale dataset demonstrate the effectiveness of the proposed model against the state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z) - Residual Feature Distillation Network for Lightweight Image
Super-Resolution [40.52635571871426]
We propose a lightweight and accurate SISR model called residual feature distillation network (RFDN)
RFDN uses multiple feature distillation connections to learn more discriminative feature representations.
We also propose a shallow residual block (SRB) as the main building block of RFDN so that the network can benefit most from residual learning.
arXiv Detail & Related papers (2020-09-24T08:46:40Z) - Multi-Attention Based Ultra Lightweight Image Super-Resolution [9.819866781885446]
We propose a Multi-Attentive Feature Fusion Super-Resolution Network (MAFFSRN)
MAFFSRN consists of proposed feature fusion groups (FFGs) that serve as a feature extraction block.
We participated in AIM 2020 efficient SR challenge with our MAFFSRN model and won 1st, 3rd, and 4th places in memory usage, floating-point operations (FLOPs) and number of parameters, respectively.
arXiv Detail & Related papers (2020-08-29T05:19:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.