High-Similarity-Pass Attention for Single Image Super-Resolution
- URL: http://arxiv.org/abs/2305.15768v1
- Date: Thu, 25 May 2023 06:24:14 GMT
- Title: High-Similarity-Pass Attention for Single Image Super-Resolution
- Authors: Jian-Nan Su, Min Gan, Guang-Yong Chen, Wenzhong Guo, C. L. Philip Chen
- Abstract summary: Recent developments in the field of non-local attention (NLA) have led to a renewed interest in self-similarity-based single image super-resolution (SISR).
We introduce a concise yet effective soft thresholding operation to obtain high-similarity-pass attention (HSPA).
To demonstrate the effectiveness of the HSPA, we construct a deep high-similarity-pass attention network (HSPAN).
- Score: 81.56822938033118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in the field of non-local attention (NLA) have led to a
renewed interest in self-similarity-based single image super-resolution (SISR).
Researchers have typically used the NLA to exploit non-local self-similarity (NSS) in
SISR and have achieved satisfactory reconstruction results. However, a surprising
phenomenon, namely that the standard NLA reconstructs about as well as an NLA
applied to randomly selected regions, prompted us to revisit the NLA. In this
paper, we first analyzed the attention map of the standard NLA from different
perspectives and discovered that the resulting probability distribution always
has full support for every local feature. This amounts to a statistical waste of
attention assigned to irrelevant non-local features, which is especially
harmful in SISR, where long-range dependence must be modeled over a large
number of redundant non-local features. Based on these findings, we introduced
a concise yet effective soft thresholding operation to obtain
high-similarity-pass attention (HSPA), which yields a more compact and
interpretable distribution. Furthermore, we derived key properties of the soft
thresholding operation that enable our HSPA to be trained in an end-to-end
manner. The HSPA can be integrated into existing deep SISR models as an
efficient, general building block. In addition, to demonstrate the
effectiveness of the HSPA, we constructed a deep high-similarity-pass attention
network (HSPAN) by integrating a few HSPAs into a simple backbone. Extensive
experimental results demonstrate that HSPAN outperforms state-of-the-art
approaches in both quantitative and qualitative evaluations.
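The abstract does not spell out the exact form of the soft thresholding operation, but one natural, hedged reading is a sparsemax-style projection: subtract a data-dependent threshold from the similarity scores and clip at zero, so that low-similarity non-local features receive exactly zero attention weight while the surviving weights still sum to one. The minimal sketch below illustrates that reading only; the function names (soft_threshold, hsp_attention), the dot-product similarity, and the toy shapes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def soft_threshold(scores):
    """Sparsemax-style soft thresholding (one plausible reading of the paper's
    operation, not its exact formulation): shift the scores by a data-dependent
    threshold tau and clip at zero, so low-similarity entries get exactly zero
    weight while the remaining weights sum to one."""
    z = np.sort(scores)[::-1]                 # sort scores in descending order
    cumsum = np.cumsum(z)
    k = np.arange(1, len(z) + 1)
    support = k * z > cumsum - 1              # prefix of entries kept in the support
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_max   # data-dependent threshold
    return np.maximum(scores - tau, 0.0)

def hsp_attention(query, keys, values):
    """Illustrative non-local aggregation with high-similarity-pass weights.
    query: (d,), keys/values: (n, d); hypothetical helper, for illustration only."""
    sims = keys @ query                       # similarity of the query to all non-local features
    weights = soft_threshold(sims)            # compact distribution: most weights are exactly zero
    return weights @ values                   # only high-similarity features contribute

# Toy usage: count how many non-local features survive the thresholding.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
out = hsp_attention(q, K, V)
print((soft_threshold(K @ q) > 0).sum(), "of 16 non-local features kept")
```

Because the clipping is piecewise linear, the operation is subdifferentiable almost everywhere, which is consistent with the abstract's claim that key properties of the soft thresholding allow the HSPA to be trained end-to-end.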
Related papers
- Efficient Learnable Collaborative Attention for Single Image Super-Resolution [18.955369476815136]
Non-Local Attention (NLA) is a powerful technique for capturing long-range feature correlations in deep single image super-resolution (SR).
We propose a novel Learnable Collaborative Attention (LCoA) that introduces inductive bias into non-local modeling.
Our LCoA can reduce the non-local modeling time by about 83% in the inference stage.
arXiv Detail & Related papers (2024-04-07T11:25:04Z) - ESSAformer: Efficient Transformer for Hyperspectral Image
Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z) - SLLEN: Semantic-aware Low-light Image Enhancement Network [92.80325772199876]
We develop a semantic-aware LLE network (SLLEN) composed of an LLE main-network (LLEmN) and an SS auxiliary-network (SSaN).
Unlike currently available approaches, the proposed SLLEN is able to fully leverage the semantic information, e.g., IEF, HSF, and the SS dataset, to assist LLE.
Comparisons between the proposed SLLEN and other state-of-the-art techniques demonstrate the superiority of SLLEN with respect to LLE quality.
arXiv Detail & Related papers (2022-11-21T15:29:38Z) - Efficient Non-Local Contrastive Attention for Image Super-Resolution [48.093500219958834]
Non-Local Attention (NLA) brings significant improvement for Single Image Super-Resolution (SISR) by leveraging intrinsic feature correlation in natural images.
We propose a novel Efficient Non-Local Contrastive Attention (ENLCA) to perform long-range visual modeling and leverage more relevant non-local features.
arXiv Detail & Related papers (2022-01-11T05:59:09Z) - SOSP: Efficiently Capturing Global Correlations by Second-Order
Structured Pruning [8.344476599818828]
We devise two novel saliency-based methods for second-order structured pruning (SOSP).
SOSP-H employs an innovative second-order approximation, which enables saliency evaluations by fast Hessian-vector products.
We show that our algorithms systematically reveal architectural bottlenecks, which we then remove to further increase the accuracy of the networks.
arXiv Detail & Related papers (2021-10-19T13:53:28Z) - Image Super-Resolution with Cross-Scale Non-Local Attention and
Exhaustive Self-Exemplars Mining [66.82470461139376]
We propose the first Cross-Scale Non-Local (CS-NL) attention module with integration into a recurrent neural network.
By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution image.
arXiv Detail & Related papers (2020-06-02T07:08:58Z) - Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z) - Hybrid Multiple Attention Network for Semantic Segmentation in Aerial
Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)