Single Image Super-Resolution via a Holistic Attention Network
- URL: http://arxiv.org/abs/2008.08767v1
- Date: Thu, 20 Aug 2020 04:13:15 GMT
- Title: Single Image Super-Resolution via a Holistic Attention Network
- Authors: Ben Niu, Weilei Wen, Wenqi Ren, Xiangde Zhang, Lianping Yang, Shuzhen
Wang, Kaihao Zhang, Xiaochun Cao and Haifeng Shen
- Abstract summary: We propose a new holistic attention network (HAN) to model the holistic interdependencies among layers, channels, and positions.
The proposed HAN adaptively emphasizes hierarchical features by considering correlations among layers.
Experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
- Score: 87.42409213909269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Informative features play a crucial role in the single image super-resolution
task. Channel attention has been demonstrated to be effective for preserving
information-rich features in each layer. However, channel attention treats each
convolution layer as a separate process that misses the correlation among
different layers. To address this problem, we propose a new holistic attention
network (HAN), which consists of a layer attention module (LAM) and a
channel-spatial attention module (CSAM), to model the holistic
interdependencies among layers, channels, and positions. Specifically, the
proposed LAM adaptively emphasizes hierarchical features by considering
correlations among layers. Meanwhile, CSAM learns the confidence at all the
positions of each channel to selectively capture more informative features.
Extensive experiments demonstrate that the proposed HAN performs favorably
against the state-of-the-art single image super-resolution approaches.
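The abstract describes the two modules only at a high level. Below is a minimal PyTorch sketch of the two ideas: layer attention over stacked hierarchical features, and channel-spatial attention that learns a confidence value at every position of every channel. The tensor shapes, the 3D-convolution kernel size, and the zero-initialized residual scales are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the two attention ideas described in the abstract.
# Shapes, kernel sizes, and residual scaling are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerAttention(nn.Module):
    """Layer attention (LAM-style): weighs hierarchical features by their
    inter-layer correlations."""

    def __init__(self):
        super().__init__()
        # Learnable residual scale, zero-initialized so the module starts
        # as an identity mapping (a common choice; assumed here).
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, feats):
        # feats: (B, N, C, H, W) -- features collected from N layers/groups
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                   # (B, N, C*H*W)
        corr = torch.bmm(flat, flat.transpose(1, 2))  # (B, N, N) layer correlations
        attn = F.softmax(corr, dim=-1)
        out = torch.bmm(attn, flat).view(b, n, c, h, w)
        return self.scale * out + feats               # residual connection


class ChannelSpatialAttention(nn.Module):
    """Channel-spatial attention (CSAM-style): a confidence value for every
    position of every channel, produced here by a small 3D convolution."""

    def __init__(self):
        super().__init__()
        self.conv3d = nn.Conv3d(1, 1, kernel_size=3, padding=1)
        self.scale = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (B, C, H, W) -- feature map to be re-weighted
        attn = torch.sigmoid(self.conv3d(x.unsqueeze(1))).squeeze(1)  # (B, C, H, W)
        return self.scale * (x * attn) + x


if __name__ == "__main__":
    feats = torch.randn(2, 4, 8, 16, 16)              # 4 hierarchical feature maps
    print(LayerAttention()(feats).shape)              # (2, 4, 8, 16, 16)
    print(ChannelSpatialAttention()(torch.randn(2, 8, 16, 16)).shape)
```

In a full network, the layer-attended features and the channel-spatial-attended features would be fused and fed to the upsampling/reconstruction stage; that fusion is omitted here.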
Related papers
- SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion [59.96233305733875]
Time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare.
Several methods utilize mechanisms like attention or mixers to address this by capturing channel correlations.
This paper presents an efficient model, the Series-cOre Fused Time Series forecaster (SOFTS).
arXiv Detail & Related papers (2024-04-22T14:06:35Z) - Efficient Multi-Scale Attention Module with Cross-Spatial Learning [4.046170185945849]
A novel efficient multi-scale attention (EMA) module is proposed.
We focus on retaining the information of each channel while decreasing the computational overhead.
We conduct extensive ablation studies and experiments on image classification and object detection tasks.
arXiv Detail & Related papers (2023-05-23T00:35:47Z) - CAT: Learning to Collaborate Channel and Spatial Attention from
Multi-Information Fusion [23.72040577828098]
We propose a plug-and-play attention module, which we term "CAT", activating the Collaboration between spatial and channel Attentions.
Specifically, we represent traits as trainable coefficients (i.e., colla-factors) to adaptively combine the contributions of different attention modules (see the sketch after this list).
Our CAT outperforms existing state-of-the-art attention mechanisms in object detection, instance segmentation, and image classification.
arXiv Detail & Related papers (2022-12-13T02:34:10Z) - A Discriminative Channel Diversification Network for Image
Classification [21.049734250642974]
We propose a lightweight and effective attention module, called the channel diversification block, to enhance the global context.
Unlike other channel attention mechanisms, the proposed module focuses on the most discriminative features.
Experiments on CIFAR-10, SVHN, and Tiny-ImageNet datasets demonstrate that the proposed module improves the performance of the baseline networks by a margin of 3% on average.
arXiv Detail & Related papers (2021-12-10T23:00:53Z) - Dense Dual-Attention Network for Light Field Image Super-Resolution [13.683743266136014]
Light field (LF) images can be used to improve the performance of image super-resolution (SR).
It is challenging to incorporate distinctive information from different views for LF image SR.
We propose a dense dual-attention network for LF image SR.
arXiv Detail & Related papers (2021-10-23T02:10:47Z) - High-resolution Depth Maps Imaging via Attention-based Hierarchical
Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z) - Dual Attention GANs for Semantic Image Synthesis [101.36015877815537]
We propose a novel Dual Attention GAN (DAGAN) to synthesize photo-realistic and semantically-consistent images.
We also propose two novel modules, i.e., a position-wise Spatial Attention Module (SAM) and a scale-wise Channel Attention Module (CAM).
DAGAN achieves remarkably better results than state-of-the-art methods, while using fewer model parameters.
arXiv Detail & Related papers (2020-08-29T17:49:01Z) - Channel Interaction Networks for Fine-Grained Image Categorization [61.095320862647476]
Fine-grained image categorization is challenging due to the subtle inter-class differences.
We propose a channel interaction network (CIN), which models the channel-wise interplay both within an image and across images.
Our model can be trained efficiently in an end-to-end fashion without the need of multi-stage training and testing.
arXiv Detail & Related papers (2020-03-11T11:51:51Z) - Global Context-Aware Progressive Aggregation Network for Salient Object
Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z) - Hybrid Multiple Attention Network for Semantic Segmentation in Aerial
Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)
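The CAT entry above mentions trainable colla-factors that adaptively weight the contributions of different attention modules. A minimal sketch of that weighting idea follows; the toy channel and spatial attention branches and the softmax normalization of the coefficients are assumptions for illustration, not the paper's implementation.

```python
# Sketch: combining attention branches with trainable coefficients ("colla-factors").
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyChannelAttention(nn.Module):
    """Stand-in channel attention: one learned weight per channel."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        w = torch.sigmoid(self.fc(F.adaptive_avg_pool2d(x, 1)))  # (B, C, 1, 1)
        return x * w


class ToySpatialAttention(nn.Module):
    """Stand-in spatial attention: one weight per position, shared by all channels."""
    def forward(self, x):
        w = torch.sigmoid(x.mean(dim=1, keepdim=True))  # (B, 1, H, W)
        return x * w


class CollaborativeAttention(nn.Module):
    """Fuse attention branches with trainable coefficients (colla-factors)."""
    def __init__(self, branches):
        super().__init__()
        self.branches = nn.ModuleList(branches)
        self.colla_factors = nn.Parameter(torch.ones(len(branches)))

    def forward(self, x):
        weights = F.softmax(self.colla_factors, dim=0)  # normalized contributions
        return sum(w * branch(x) for w, branch in zip(weights, self.branches))


if __name__ == "__main__":
    cat = CollaborativeAttention([ToyChannelAttention(8), ToySpatialAttention()])
    print(cat(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```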
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.