Light Weight Residual Dense Attention Net for Spectral Reconstruction
from RGB Images
- URL: http://arxiv.org/abs/2004.06930v2
- Date: Sun, 19 Apr 2020 03:04:57 GMT
- Title: Light Weight Residual Dense Attention Net for Spectral Reconstruction
from RGB Images
- Authors: D. Sabari Nathan, K. Uma, D. Synthiya Vinothini, B. Sathya Bama,
S. M. Md Mansoor Roomi
- Abstract summary: This work proposes a novel lightweight network, with only about 233,059 parameters, based on a residual dense model with an attention mechanism.
The network is trained on the NTIRE 2020 challenge dataset and achieves an MRAE of 0.0457 with low computational complexity.
- Score: 0.34998703934432684
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Hyperspectral imaging is the acquisition of the spectral and
spatial information of a particular scene. Capturing such information with a
specialized hyperspectral camera remains costly, so reconstructing it from an
RGB image is an attractive alternative that also benefits classification and
object recognition tasks. This work proposes a novel lightweight network,
with only about 233,059 parameters, based on a residual dense model with an
attention mechanism. The network uses a Coordination Convolutional Block to
capture spatial information. The weights from this block are shared by two
independent feature extraction branches: dense feature extraction and
multiscale hierarchical feature extraction. Finally, the features from both
branches are globally fused to produce the 31 spectral bands. The network is
trained on the NTIRE 2020 challenge dataset and achieves an MRAE of 0.0457
with low computational complexity.
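Because the abstract walks through a concrete data flow (a coordinate-aware block whose output feeds a dense branch and a multiscale branch, followed by global fusion into 31 bands) and cites the NTIRE 2020 MRAE metric, a short PyTorch sketch may make both concrete. Everything below is an illustrative assumption, not the authors' implementation: the module names, channel widths, layer counts, the CoordConv-style reading of the Coordination Convolutional Block, and the SE-style attention are all guesses consistent with the abstract only.

```python
# A minimal PyTorch sketch of the data flow described in the abstract.
# All names, channel widths, and layer counts are illustrative assumptions.
import torch
import torch.nn as nn


def mrae(pred, target, eps=1e-6):
    """Mean Relative Absolute Error, the NTIRE 2020 challenge metric."""
    return torch.mean(torch.abs(pred - target) / (target + eps))


class CoordConvBlock(nn.Module):
    """Concatenate normalized x/y coordinate maps before a convolution so
    the features are aware of absolute spatial position (one plausible
    reading of the "Coordination Convolutional Block"; an assumption)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return torch.relu(self.conv(torch.cat([x, ys, xs], dim=1)))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumption)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)


class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions, local fusion, attention, residual skip."""
    def __init__(self, ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1) for i in range(n_layers))
        self.fuse = nn.Conv2d(ch + n_layers * growth, ch, 1)
        self.att = ChannelAttention(ch)

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.att(self.fuse(torch.cat(feats, dim=1)))


class MultiScaleBlock(nn.Module):
    """Parallel dilated convolutions standing in for the multiscale
    hierarchical branch (an assumption)."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return x + self.fuse(torch.cat([torch.relu(b(x)) for b in self.branches], dim=1))


class SpectralReconstructionNet(nn.Module):
    """RGB (3 channels) in, 31 hyperspectral bands out."""
    def __init__(self, ch=32, out_bands=31):
        super().__init__()
        self.coord = CoordConvBlock(3, ch)          # shared spatial features
        self.dense_branch = ResidualDenseBlock(ch)  # branch 1: dense features
        self.multi_branch = MultiScaleBlock(ch)     # branch 2: multiscale features
        self.global_fuse = nn.Conv2d(2 * ch, out_bands, 3, padding=1)

    def forward(self, rgb):
        shared = self.coord(rgb)  # output feeds both branches
        fused = torch.cat([self.dense_branch(shared),
                           self.multi_branch(shared)], dim=1)
        return self.global_fuse(fused)  # global fusion into 31 bands


net = SpectralReconstructionNet()
rgb = torch.rand(1, 3, 64, 64)
gt = torch.rand(1, 31, 64, 64) + 0.1  # hypothetical positive ground truth
print(net(rgb).shape)                 # torch.Size([1, 31, 64, 64])
print(mrae(net(rgb), gt))             # scalar MRAE value
```

The parameter count of this sketch will not match the paper's 233,059; the block only shows how coordinate features, the two branches, and global fusion fit together, and how MRAE, mean(|reconstructed - ground truth| / ground truth) with a small epsilon guarding against division by zero, is typically computed.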
Related papers
- SSF-Net: Spatial-Spectral Fusion Network with Spectral Angle Awareness
for Hyperspectral Object Tracking [21.664141982246598]
Hyperspectral video (HSV) offers valuable spatial, spectral, and temporal information simultaneously.
Existing methods primarily focus on band regrouping and rely on RGB trackers for feature extraction.
In this paper, a spatial-spectral fusion network with spectral angle awareness (SSF-Net) is proposed for hyperspectral (HS) object tracking.
arXiv Detail & Related papers (2024-03-09T09:37:13Z)
- Efficient Segmentation with Texture in Ore Images Based on Box-supervised Approach [6.6773975364173]
A box-supervised technique with texture features is proposed to identify complete and independent ores.
The proposed method achieves over 50 frames per second with a small model size of 21.6 MB.
The method maintains a high level of accuracy compared with state-of-the-art approaches on the ore image dataset.
arXiv Detail & Related papers (2023-11-10T08:28:22Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
The cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution, zero-centric residual image that contains the high-frequency spatial details of the scene (a minimal sketch of the idea follows this entry).
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
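The zero-centric residual idea above can be illustrated with a short sketch: predict a residual, subtract its per-band spatial mean so it carries only high-frequency detail, and add it to an upsampled base image. This is a loose, assumption-based illustration of the stated idea only; it ignores PZRes-Net's cross-modality fusion and progressive structure, and all names and sizes are invented.

```python
# Minimal sketch of zero-centric residual learning (assumptions throughout;
# this is not PZRes-Net itself).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroCentricResidualSR(nn.Module):
    """Predict a zero-mean (zero-centric) residual holding high-frequency
    detail and add it to a bicubically upsampled base image."""
    def __init__(self, bands=31, ch=32, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(bands, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, bands, 3, padding=1))

    def forward(self, lr_hsi):
        base = F.interpolate(lr_hsi, scale_factor=self.scale,
                             mode="bicubic", align_corners=False)
        residual = self.body(base)
        # Enforce the zero-centric property: remove the per-band spatial
        # mean so the residual carries only high-frequency detail.
        residual = residual - residual.mean(dim=(2, 3), keepdim=True)
        return base + residual


hr = ZeroCentricResidualSR()(torch.rand(1, 31, 16, 16))
print(hr.shape)  # torch.Size([1, 31, 64, 64])
```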
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train (a minimal sketch of separable 3D convolution follows this list).
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
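To close, the separable 3D convolution mentioned in the SSRNet entry factorizes a full 3x3x3 kernel over (band, height, width) into a spatial 1x3x3 convolution followed by a spectral 3x1x1 convolution (read here as spatial plus spectral; an assumption), which cuts parameter count and memory. A minimal sketch with assumed channel sizes:

```python
# Minimal sketch of spatial/spectral separable 3D convolution (assumed sizes).
import torch
import torch.nn as nn


class SeparableConv3d(nn.Module):
    """Factorize a 3x3x3 convolution over (band, height, width) into a
    spatial 1x3x3 convolution and a spectral 3x1x1 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        self.spectral = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))

    def forward(self, x):  # x: (batch, channels, bands, H, W)
        return self.spectral(torch.relu(self.spatial(x)))


full = nn.Conv3d(16, 16, 3, padding=1)
sep = SeparableConv3d(16, 16)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), count(sep))     # 6928 vs 3104: fewer parameters
y = sep(torch.rand(1, 16, 31, 8, 8))
print(y.shape)                     # torch.Size([1, 16, 31, 8, 8])
```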
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.