HYPER-SNN: Towards Energy-efficient Quantized Deep Spiking Neural
Networks for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2107.11979v2
- Date: Wed, 28 Jul 2021 06:17:55 GMT
- Title: HYPER-SNN: Towards Energy-efficient Quantized Deep Spiking Neural
Networks for Hyperspectral Image Classification
- Authors: Gourav Datta, Souvik Kundu, Akhilesh R. Jaiswal, Peter A. Beerel
- Abstract summary: Spiking Neural Networks (SNNs) are trained with quantization-aware gradient descent to optimize weights, membrane leak, and firing thresholds.
During both training and inference, the analog pixel values of an HSI are directly applied to the input layer of the SNN without the need to convert to a spike-train.
We evaluate our proposal using three HSI datasets on a 3-D and a 3-D/2-D hybrid convolutional architecture.
- Score: 5.094623170336122
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral images (HSI) provide rich spectral and spatial information
across a series of contiguous spectral bands. However, the accurate processing
of the spectral and spatial correlation between the bands requires the use of
energy-expensive 3-D Convolutional Neural Networks (CNNs). To address this
challenge, we propose the use of Spiking Neural Networks (SNNs) that are
generated from iso-architecture CNNs and trained with quantization-aware
gradient descent to optimize their weights, membrane leak, and firing
thresholds. During both training and inference, the analog pixel values of an
HSI are directly applied to the input layer of the SNN without the need to
convert to a spike-train. The reduced latency of our training technique
combined with high activation sparsity yields significant improvements in
computational efficiency. We evaluate our proposal using three HSI datasets on
a 3-D and a 3-D/2-D hybrid convolutional architecture. We achieve overall
accuracy, average accuracy, and kappa coefficient of 98.68%, 98.34%, and 98.20%
respectively with 5 time steps (inference latency) and 6-bit weight
quantization on the Indian Pines dataset. In particular, our models achieved
accuracies similar to state-of-the-art (SOTA) with 560.6 and 44.8 times less
compute energy on average over three HSI datasets than an iso-architecture
full-precision and 6-bit quantized CNN, respectively.
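The abstract's core ingredients (a leaky-integrate-and-fire layer with trainable membrane leak and firing threshold, low-bit weight quantization during training, and direct encoding of the analog HSI pixel values at the input) can be illustrated with a minimal sketch. This is not the authors' released code: it assumes PyTorch, and the surrogate gradient, quantizer, and layer shapes are illustrative choices only.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 6) -> torch.Tensor:
    # Uniform symmetric fake quantization of weights to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()   # straight-through estimator for gradients

class SpikeFn(torch.autograd.Function):
    # Heaviside spike with a simple rectangular surrogate gradient.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class LIFConv3d(nn.Module):
    # One 3-D convolution followed by LIF dynamics with learnable leak and threshold.
    def __init__(self, in_ch, out_ch, k=3, bits=6):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.leak = nn.Parameter(torch.tensor(0.9))        # trainable membrane leak
        self.threshold = nn.Parameter(torch.tensor(1.0))   # trainable firing threshold
        self.bits = bits

    def forward(self, x, mem):
        w_q = fake_quantize(self.conv.weight, self.bits)
        mem = self.leak * mem + F.conv3d(x, w_q, padding=self.conv.padding)
        spike = SpikeFn.apply(mem - self.threshold)
        mem = mem - spike * self.threshold                  # soft reset
        return spike, mem

# Direct encoding: the same analog HSI patch drives the first layer at every time
# step, so no Poisson spike-train conversion is needed. Illustrative sizes only:
# layer = LIFConv3d(1, 8)
# x = torch.rand(1, 1, 30, 25, 25)          # (batch, 1, bands, H, W) analog patch
# mem = torch.zeros(1, 8, 30, 25, 25)       # membrane state
# for t in range(5):                        # 5 time steps, as in the abstract
#     spikes, mem = layer(x, mem)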
Related papers
- TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials [1.1970409518725493]
We focus on interfacing these networks with event-based data to perform online classification and detection with low latency.
We experimented with three event-based benchmarks and obtained state-of-the-art results on all three by large margins with significantly smaller memory and compute costs.
arXiv Detail & Related papers (2024-05-20T17:06:24Z) - Sharpend Cosine Similarity based Neural Network for Hyperspectral Image
Classification [0.456877715768796]
Hyperspectral Image Classification (HSIC) is a difficult task due to high inter- and intra-class similarity and variability, nested regions, and overlapping.
2D Convolutional Neural Networks (CNNs) emerged as a viable option, whereas 3D CNNs are a better alternative owing to their more accurate classification.
This paper introduces the Sharpened Cosine Similarity (SCS) concept as an alternative to convolutions in a neural network for HSIC; a small sketch of the SCS operation appears after this list.
arXiv Detail & Related papers (2023-05-26T07:04:00Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Learning A 3D-CNN and Transformer Prior for Hyperspectral Image
Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
arXiv Detail & Related papers (2021-11-27T15:38:57Z) - A New Backbone for Hyperspectral Image Reconstruction [90.48427561874402]
3D hyperspectral image (HSI) reconstruction refers to the inverse process of snapshot compressive imaging.
A Spatial/Spectral Invariant Residual U-Net, namely SSI-ResU-Net, is proposed.
We show that SSI-ResU-Net achieves competitive performance with over a 77.3% reduction in floating-point operations.
arXiv Detail & Related papers (2021-08-17T16:20:51Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Hyperspectral Image Classification: Artifacts of Dimension Reduction on
Hybrid CNN [1.2875323263074796]
2D and 3D CNN models have proved highly efficient in exploiting the spatial and spectral information of Hyperspectral Images.
This work proposed a lightweight hybrid CNN (3D followed by 2D CNN) model that significantly reduces the computational cost; a sketch of such a hybrid architecture appears after this list.
arXiv Detail & Related papers (2021-01-25T18:43:57Z) - Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer
Learning [67.40866334083941]
We propose an end-to-end 3-D lightweight convolutional neural network (CNN) for limited samples-based HSI classification.
Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and lower computation cost.
Our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
arXiv Detail & Related papers (2020-12-07T03:44:35Z) - Hyperspectral Image Classification with Spatial Consistence Using Fully
Convolutional Spatial Propagation Network [9.583523548244683]
Deep convolutional neural networks (CNNs) have shown an impressive ability to represent hyperspectral images (HSIs).
We propose a novel end-to-end, pixels-to-pixels fully convolutional spatial propagation network (FCSPN) for HSI classification.
FCSPN consists of a 3D fully convolutional network (3D-FCN) and a convolutional spatial propagation network (CSPN).
arXiv Detail & Related papers (2020-08-04T09:05:52Z) - A Fast 3D CNN for Hyperspectral Image Classification [0.456877715768796]
Hyperspectral imaging (HSI) has been extensively utilized for a number of real-world applications.
A 2D Convolutional Neural Network (CNN) is a viable approach, but HSIC depends heavily on both spectral and spatial information.
This work proposed a 3D CNN model that utilizes both spatial and spectral feature maps to attain good performance.
arXiv Detail & Related papers (2020-04-29T12:57:36Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatially and spectrally separable 3D convolutions to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
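For the sharpened cosine similarity (SCS) entry above, the operation that replaces convolution is commonly written as sign(s.k) * (|s.k| / ((||s|| + q)(||k|| + q)))^p with learnable p and q. The sketch below is a hypothetical single-patch illustration in PyTorch, not the cited paper's code; an SCS layer would slide this over patches exactly as a convolution does.

import torch

def sharpened_cosine_similarity(s: torch.Tensor, k: torch.Tensor,
                                p: float = 2.0, q: float = 0.1) -> torch.Tensor:
    # s: flattened input patch, k: kernel of the same shape.
    # p sharpens the response; q keeps the norms away from zero.
    dot = (s * k).sum()
    cos = dot.abs() / ((s.norm() + q) * (k.norm() + q))
    return torch.sign(dot) * cos ** p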
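The hybrid 3-D/2-D architectures mentioned in the HYPER-SNN abstract and in the "Artifacts of Dimension Reduction on Hybrid CNN" entry share a common pattern: a few 3-D convolutions extract joint spectral-spatial features, the spectral axis is then folded into the channel axis, and cheaper 2-D convolutions refine the spatial features before classification. Below is a minimal sketch under assumed patch and band sizes, with hypothetical layer widths in PyTorch; it is not any of the cited papers' exact models.

import torch
import torch.nn as nn

class Hybrid3D2DCNN(nn.Module):
    def __init__(self, bands: int = 30, patch: int = 25, num_classes: int = 16):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
        )
        reduced_bands = bands - 6 - 4          # spectral size after the two 3-D convs
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * reduced_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * patch * patch, num_classes)

    def forward(self, x):                      # x: (B, 1, bands, H, W)
        x = self.conv3d(x)                     # (B, 16, reduced_bands, H, W)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)          # fold the spectral dim into channels
        x = self.conv2d(x)
        return self.head(x.flatten(1))

# logits = Hybrid3D2DCNN()(torch.rand(2, 1, 30, 25, 25))   # (2, 16) class scores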