Efficient Star Distillation Attention Network for Lightweight Image Super-Resolution
- URL: http://arxiv.org/abs/2506.12475v1
- Date: Sat, 14 Jun 2025 12:24:15 GMT
- Title: Efficient Star Distillation Attention Network for Lightweight Image Super-Resolution
- Authors: Fangwei Hao, Ji Du, Desheng Kong, Jiesheng Wu, Jing Xu, Ping Li
- Abstract summary: We propose a Star Distillation Module (SDM) to enhance discriminative representation learning via information distillation in the HDNL feature spaces. Besides, we present a Multi-shape Multi-scale Large Kernel Attention (MM-LKA) module to learn representative long-range dependencies. Our SDAN with low model complexity yields superior performance quantitatively and visually.
- Score: 14.729612888730404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the performance of lightweight Single-Image Super-Resolution (SISR) has improved significantly with the application of Convolutional Neural Networks (CNNs) and Large Kernel Attention (LKA). However, existing information distillation modules for lightweight SISR struggle to map inputs into High-Dimensional Non-Linear (HDNL) feature spaces, which limits their representation learning. Moreover, their LKA modules have limited ability to capture multi-shape, multi-scale information for long-range dependencies, and their computational burden grows quadratically with the kernel size of the depth-wise convolutional layer. To address these issues, we first propose a Star Distillation Module (SDM) to enhance discriminative representation learning via information distillation in the HDNL feature spaces. We also present a Multi-shape Multi-scale Large Kernel Attention (MM-LKA) module that learns representative long-range dependencies at low computational and memory cost, significantly improving the performance of CNN-based self-attention. Integrating SDM and MM-LKA, we develop a Residual Star Distillation Attention Module (RSDAM) and use it as the building block of the proposed efficient Star Distillation Attention Network (SDAN), which recovers a higher-quality image from its low-resolution (LR) counterpart with high reconstruction efficiency. Extensive experiments show that, compared with other lightweight state-of-the-art SISR methods, our SDAN yields superior quantitative and visual performance with low model complexity.
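The exact SDM and MM-LKA designs are not spelled out in this listing, but the two ideas the abstract names can be sketched: a "star" block that multiplies two linear branches element-wise to implicitly lift features into a high-dimensional non-linear space, and a large-kernel attention branch built from horizontal and vertical depth-wise strip convolutions at several sizes, so cost grows linearly rather than quadratically with kernel size. The sketch below is illustrative only; all module names, channel counts, and kernel sizes are assumptions, not the authors' SDAN configuration.

```python
# Minimal PyTorch sketch of the two ideas named in the abstract. Module names,
# channel counts, and kernel sizes are illustrative assumptions, not the
# authors' actual SDAN configuration.
import torch
import torch.nn as nn


class StarBlock(nn.Module):
    """'Star' operation: element-wise product of two linear branches, which
    implicitly lifts features into a high-dimensional non-linear space."""
    def __init__(self, dim, expand=2):
        super().__init__()
        self.f1 = nn.Conv2d(dim, dim * expand, 1)
        self.f2 = nn.Conv2d(dim, dim * expand, 1)
        self.g = nn.Conv2d(dim * expand, dim, 1)

    def forward(self, x):
        return self.g(self.f1(x) * self.f2(x))  # the "star": element-wise multiply


class MultiShapeMultiScaleLKA(nn.Module):
    """Large-kernel attention built from depth-wise strip convolutions
    (1 x k and k x 1) at several sizes, so cost grows linearly with k."""
    def __init__(self, dim, sizes=(7, 11, 21)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in sizes:
            pad = k // 2
            self.branches.append(nn.Sequential(
                nn.Conv2d(dim, dim, (1, k), padding=(0, pad), groups=dim),  # horizontal strip
                nn.Conv2d(dim, dim, (k, 1), padding=(pad, 0), groups=dim),  # vertical strip
            ))
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = sum(branch(x) for branch in self.branches)
        return x * self.proj(attn)  # attention map modulates the input


if __name__ == "__main__":
    feat = torch.randn(1, 48, 64, 64)
    out = MultiShapeMultiScaleLKA(48)(StarBlock(48)(feat))
    print(out.shape)  # torch.Size([1, 48, 64, 64])
```

In SDAN these two components are reportedly combined inside a Residual Star Distillation Attention Module (RSDAM); the sketch only illustrates the individual operations.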
Related papers
- Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution [19.22142805041799]
Convolutional neural networks (CNNs) have been widely used in efficient image super-resolution. We propose Distillation-Supervised Convolutional Low-Rank Adaptation (DSCLoRA), which improves model performance without increasing architectural complexity or inference costs.
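The summary above only names the mechanism (low-rank adaptation of convolutional layers, supervised by feature distillation, with no extra inference cost). Below is a generic, hypothetical sketch of a convolutional low-rank adapter that can be folded back into the frozen kernel after training; it omits the distillation supervision and is not DSCLoRA's actual implementation.

```python
# Hedged sketch of a low-rank adapter on a convolution (assumes groups=1);
# the adapter can be merged into the frozen kernel, so inference cost is unchanged.
import torch
import torch.nn as nn


class ConvLoRA(nn.Module):
    """Hypothetical low-rank adapter wrapped around a frozen convolution."""
    def __init__(self, conv: nn.Conv2d, rank: int = 4):
        super().__init__()
        self.conv = conv
        self.conv.weight.requires_grad_(False)   # keep the pretrained kernel fixed
        # Trainable low-rank branch: spatial down-projection then 1x1 up-projection.
        self.down = nn.Conv2d(conv.in_channels, rank, conv.kernel_size,
                              padding=conv.padding, bias=False)
        self.up = nn.Conv2d(rank, conv.out_channels, 1, bias=False)
        nn.init.zeros_(self.up.weight)           # adapter starts as a zero update

    def forward(self, x):
        return self.conv(x) + self.up(self.down(x))

    @torch.no_grad()
    def merge(self):
        """Fold the adapter into the frozen kernel for deployment."""
        # (out, rank, 1, 1) composed with (rank, in, kh, kw) -> (out, in, kh, kw)
        delta = torch.einsum("orxy,rihw->oihw", self.up.weight, self.down.weight)
        self.conv.weight += delta
```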
arXiv Detail & Related papers (2025-04-15T15:12:57Z)
- Large coordinate kernel attention network for lightweight image super-resolution [5.66935513638074]
We propose the multi-scale blueprint separable convolutions (MBSConv) as a highly efficient building block with a multi-scale receptive field.
We also propose a large coordinate kernel attention (LCKA) module which decomposes the 2D convolutional kernels of the depth-wise convolutional layers in LKA into horizontal and vertical 1-D kernels.
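The 1-D decomposition described above can be sketched directly: a k x k depth-wise convolution is replaced by a 1 x k horizontal and a k x 1 vertical depth-wise convolution, reducing the per-channel weights from k*k to 2k. The helper below is an illustrative sketch only; the full LCKA module is more involved.

```python
import torch.nn as nn


def depthwise_large_kernel(dim: int, k: int, decomposed: bool = True) -> nn.Module:
    """k x k depth-wise convolution vs. its 1-D decomposition.

    The decomposed form uses 2*k weights per channel instead of k*k,
    so the cost grows linearly rather than quadratically with k.
    """
    if not decomposed:
        return nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)
    return nn.Sequential(
        nn.Conv2d(dim, dim, (1, k), padding=(0, k // 2), groups=dim),  # horizontal 1-D kernel
        nn.Conv2d(dim, dim, (k, 1), padding=(k // 2, 0), groups=dim),  # vertical 1-D kernel
    )
```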
arXiv Detail & Related papers (2024-05-15T14:03:38Z)
- Low-Resolution Self-Attention for Semantic Segmentation [93.30597515880079]
We introduce the Low-Resolution Self-Attention (LRSA) mechanism to capture global context at a significantly reduced computational cost.
Our approach involves computing self-attention in a fixed low-resolution space regardless of the input image's resolution.
We demonstrate the effectiveness of our LRSA approach by building the LRFormer, a vision transformer with an encoder-decoder structure.
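A minimal sketch of the fixed-low-resolution attention idea: features are pooled to a fixed grid, standard multi-head self-attention runs on those few tokens, and the result is upsampled and added back residually. The pooling size, head count, and use of nn.MultiheadAttention are assumptions for illustration, not LRFormer's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowResolutionSelfAttention(nn.Module):
    """Self-attention computed on a fixed low-resolution grid (illustrative sketch)."""
    def __init__(self, dim, pool_size=16, num_heads=4):
        super().__init__()
        # dim must be divisible by num_heads.
        self.pool_size = pool_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                     # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Attention cost depends on pool_size**2 tokens, not on H*W.
        low = F.adaptive_avg_pool2d(x, self.pool_size)
        tokens = low.flatten(2).transpose(1, 2)               # (B, P*P, C)
        out, _ = self.attn(tokens, tokens, tokens)
        out = out.transpose(1, 2).view(b, c, self.pool_size, self.pool_size)
        out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        return x + out                                        # residual connection
```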
arXiv Detail & Related papers (2023-10-08T06:10:09Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Implicit Diffusion Models for Continuous Super-Resolution [65.45848137914592]
This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution.
IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework.
The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output.
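As a toy illustration of that scale-conditioned behaviour (not IDM's actual conditioning network), a learned gate can turn the scaling factor into per-channel weights that blend LR-derived features with generated features:

```python
import torch
import torch.nn as nn


class ScaleConditionedFusion(nn.Module):
    """Toy sketch of scale-adaptive mixing of LR features and generated features."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(1, dim), nn.Sigmoid())

    def forward(self, lr_feat, gen_feat, scale):
        # scale: (B, 1) scaling factor; the gate shifts the balance between
        # LR fidelity and generated detail as the factor changes.
        alpha = self.gate(scale).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return alpha * lr_feat + (1.0 - alpha) * gen_feat
```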
arXiv Detail & Related papers (2023-03-29T07:02:20Z)
- Incorporating Transformer Designs into Convolutions for Lightweight Image Super-Resolution [46.32359056424278]
Large convolutional kernels have become popular in designing convolutional neural networks.
The increase in kernel size also leads to a quadratic growth in the number of parameters, resulting in heavy computation and memory requirements.
We propose a neighborhood attention (NA) module that upgrades the standard convolution with a self-attention mechanism.
Building upon the NA module, we propose a lightweight single image super-resolution (SISR) network named TCSR.
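A compact, single-head sketch of the neighborhood-attention idea follows: every pixel attends only to its k x k spatial neighborhood, so cost per pixel depends on k^2 rather than on the full image size. Real NA/TCSR modules add heads, positional bias, and efficient kernels; this version is hypothetical and for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleNeighborhoodAttention(nn.Module):
    """Each pixel attends only to its k x k spatial neighborhood (single head)."""
    def __init__(self, dim, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.scale = dim ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        pad = self.k // 2
        # Gather the k*k neighbours of every pixel: (b, c, k*k, h*w).
        k_unf = F.unfold(k, self.k, padding=pad).reshape(b, c, self.k * self.k, h * w)
        v_unf = F.unfold(v, self.k, padding=pad).reshape(b, c, self.k * self.k, h * w)
        q = q.reshape(b, c, 1, h * w)
        attn = (q * k_unf).sum(dim=1, keepdim=True) * self.scale  # (b, 1, k*k, h*w)
        attn = attn.softmax(dim=2)                                # over the neighbourhood
        out = (attn * v_unf).sum(dim=2).reshape(b, c, h, w)
        return self.proj(out)
```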
arXiv Detail & Related papers (2023-03-25T01:32:18Z)
- Lightweight Image Super-Resolution with Multi-scale Feature Interaction Network [15.846394239848959]
We present a lightweight multi-scale feature interaction network (MSFIN).
For lightweight SISR, MSFIN expands the receptive field and adequately exploits the informative features of the low-resolution observed images.
Our proposed MSFIN achieves comparable performance to the state of the art with a more lightweight model.
arXiv Detail & Related papers (2021-03-24T07:25:21Z)
- MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution [2.3576437999036473]
A novel lightweight super-resolution network is proposed that improves on state-of-the-art (SOTA) performance in lightweight SR.
The proposed architecture also contains a new attention mechanism, the Two-Fold Attention Module, to maximize the representation ability of the model.
arXiv Detail & Related papers (2020-11-09T17:11:15Z)
- Accurate and Lightweight Image Super-Resolution with Model-Guided Deep Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named the model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN method over existing state-of-the-art image SR methods, including RCAN, SRDNF, and SRFBN, is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
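Deep unfolding in general alternates a data-consistency step against the degradation model with a learned prior (denoising) step, with each iteration unrolled as a network stage. The sketch below shows that generic pattern with bicubic resampling standing in for the degradation operator; it is not MoG-DUN's architecture, which builds in additional model-guided components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnfoldingSR(nn.Module):
    """Generic deep-unfolding SR sketch: data-consistency step + learned prior step."""
    def __init__(self, stages=4, scale=2, channels=32):
        super().__init__()
        self.scale = scale
        self.step = nn.Parameter(torch.full((stages,), 0.1))   # learned step sizes
        self.denoisers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(channels, 3, 3, padding=1))
            for _ in range(stages)
        )

    def forward(self, lr):
        # Initialize with bicubic upsampling of the LR input.
        hr = F.interpolate(lr, scale_factor=self.scale, mode="bicubic", align_corners=False)
        for t, denoiser in enumerate(self.denoisers):
            # Data-consistency: push the downsampled estimate toward the LR observation
            # (bicubic downsampling stands in for the true degradation operator).
            residual = F.interpolate(hr, scale_factor=1 / self.scale,
                                     mode="bicubic", align_corners=False) - lr
            hr = hr - self.step[t] * F.interpolate(residual, scale_factor=self.scale,
                                                   mode="bicubic", align_corners=False)
            # Prior step: a small learned denoiser refines the estimate.
            hr = hr + denoiser(hr)
        return hr
```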
arXiv Detail & Related papers (2020-09-14T08:23:37Z)
- Lightweight image super-resolution with enhanced CNN [82.36883027158308]
Deep convolutional neural networks (CNNs) with strong expressive ability have achieved impressive performance on single image super-resolution (SISR).
We propose a lightweight enhanced SR CNN (LESRCNN) with three successive sub-blocks: an information extraction and enhancement block (IEEB), a reconstruction block (RB), and an information refinement block (IRB).
The IEEB extracts hierarchical low-resolution (LR) features and aggregates them step-by-step to increase the memory ability of the shallow layers on deep layers for SISR.
The RB converts low-frequency features into high-frequency features by fusing global and local features.
arXiv Detail & Related papers (2020-07-08T18:03:40Z)
- Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining [66.82470461139376]
We propose the first Cross-Scale Non-Local (CS-NL) attention module, integrated into a recurrent neural network.
By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution image.
arXiv Detail & Related papers (2020-06-02T07:08:58Z)