Pyramidal Dense Attention Networks for Lightweight Image
Super-Resolution
- URL: http://arxiv.org/abs/2106.06996v1
- Date: Sun, 13 Jun 2021 13:49:41 GMT
- Title: Pyramidal Dense Attention Networks for Lightweight Image
Super-Resolution
- Authors: Huapeng Wu, Jie Gui, Jun Zhang, James T. Kwok, Zhihui Wei
- Abstract summary: Deep convolutional neural network methods have achieved excellent performance in image super-resolution.
We propose a pyramidal dense attention network (PDAN) for lightweight image super-resolution.
Our method achieves superior performance in comparison with the state-of-the-art lightweight SR methods.
- Score: 37.58180059860872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep convolutional neural network methods have achieved
excellent performance in image super-resolution (SR), but they cannot be easily
applied to embedded devices due to their large memory cost. To solve this
problem, we propose a pyramidal dense attention network (PDAN) for lightweight
image super-resolution in this paper. In our method, the proposed pyramidal
dense learning gradually increases the width of the densely connected layers
inside a pyramidal dense block to extract deep features efficiently. Meanwhile,
adaptive group convolution, in which the number of groups grows linearly with
the number of dense convolutional layers, is introduced to relieve the
parameter explosion. In addition, we present a novel joint attention that
captures cross-dimension interactions between the spatial and channel
dimensions in an efficient way, providing rich discriminative feature
representations. Extensive experimental results show that our method achieves
superior performance in comparison with state-of-the-art lightweight SR
methods.
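Taken at face value, the abstract suggests a block design along the following lines. This is a minimal PyTorch sketch, not the authors' implementation: the width schedule (`base_growth * (i + 1)`), the group schedule (`i + 1` groups per layer), the 1x1 fusion convolution, and the `JointAttention` module are all assumptions chosen only to illustrate pyramidal dense learning, adaptive group convolution, and joint spatial-channel gating.

```python
# Illustrative sketch of the mechanisms described in the PDAN abstract.
# All widths, schedules, and the attention design are assumptions.
import torch
import torch.nn as nn


class JointAttention(nn.Module):
    """One plausible reading of the 'joint attention': a lightweight
    channel gate and spatial gate applied jointly (details assumed)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        ca = self.channel_gate(x)                            # (B, C, 1, 1)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)  # (B, 2, H, W)
        sa = self.spatial_gate(pooled)                       # (B, 1, H, W)
        return x * ca * sa                                   # joint gating


class PyramidalDenseBlock(nn.Module):
    """Dense block whose per-layer width grows (pyramidal dense learning)
    while the number of convolution groups grows linearly with layer index
    (adaptive group convolution). Widths are picked so every channel count
    is divisible by its group count; the paper's values may differ."""

    def __init__(self, in_channels=48, base_growth=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for i in range(num_layers):
            groups = i + 1                   # assumed linear group schedule
            growth = base_growth * (i + 1)   # assumed pyramidal width schedule
            assert channels % groups == 0 and growth % groups == 0
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, 3, padding=1, groups=groups),
                nn.ReLU(inplace=True),
            ))
            channels += growth               # dense connectivity widens input
        self.fuse = nn.Conv2d(channels, in_channels, 1)  # 1x1 feature fusion
        self.attention = JointAttention(in_channels)

    def forward(self, x):
        feats = x
        for layer in self.layers:
            feats = torch.cat([feats, layer(feats)], dim=1)
        return x + self.attention(self.fuse(feats))      # residual output


if __name__ == "__main__":
    block = PyramidalDenseBlock()
    y = block(torch.randn(1, 48, 32, 32))
    print(y.shape)  # torch.Size([1, 48, 32, 32])
```

The grouped convolutions are what keep the widening layers lightweight: a layer with g groups has roughly 1/g of the parameters of its dense counterpart, so growing g in step with the layer width counteracts the parameter explosion the abstract mentions.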
Related papers
- DSR-Diff: Depth Map Super-Resolution with Diffusion Model [38.68563026759223]
We present a novel CDSR paradigm that utilizes a diffusion model within the latent space to generate guidance for depth map super-resolution.
Our proposed method has shown superior performance in extensive experiments when compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-11-16T14:18:10Z) - Guided Depth Super-Resolution by Deep Anisotropic Diffusion [18.445649181582823]
We propose a novel approach which combines guided anisotropic diffusion with a deep convolutional network.
We achieve unprecedented results in three commonly used benchmarks for guided depth super-resolution.
arXiv Detail & Related papers (2022-11-21T15:48:13Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image
Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - Deep Posterior Distribution-based Embedding for Hyperspectral Image
Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z) - Hybrid Pixel-Unshuffled Network for Lightweight Image Super-Resolution [64.54162195322246]
Convolutional neural networks (CNNs) have achieved great success in image super-resolution (SR).
Most deep CNN-based SR models require massive computation to obtain high performance.
We propose a novel Hybrid Pixel-Unshuffled Network (HPUN) by introducing an efficient and effective downsampling module into the SR task.
arXiv Detail & Related papers (2022-03-16T20:10:41Z) - Shallow Network Based on Depthwise Over-Parameterized Convolution for
Hyperspectral Image Classification [0.7329200485567825]
This letter proposes a shallow model for hyperspectral image classification (HSIC) using convolutional neural network (CNN) techniques.
The proposed method outperforms other state-of-the-art methods in terms of classification accuracy and computational efficiency.
arXiv Detail & Related papers (2021-12-01T03:10:02Z) - DDCNet: Deep Dilated Convolutional Neural Network for Dense Prediction [0.0]
An effective receptive field (ERF) and a higher resolution of spatial features within a network are essential for providing higher-resolution dense estimates.
We present a systematic approach to design network architectures that can provide a larger receptive field while maintaining a higher spatial feature resolution.
arXiv Detail & Related papers (2021-07-09T23:15:34Z) - High-resolution Depth Maps Imaging via Attention-based Hierarchical
Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train (a minimal sketch of this factorization follows the list).
arXiv Detail & Related papers (2020-01-14T03:34:55Z)