Efficient Deep Learning of Non-local Features for Hyperspectral Image
Classification
- URL: http://arxiv.org/abs/2008.00542v1
- Date: Sun, 2 Aug 2020 19:13:22 GMT
- Title: Efficient Deep Learning of Non-local Features for Hyperspectral Image
Classification
- Authors: Yu Shen, Sijie Zhu, Chen Chen, Qian Du, Liang Xiao, Jianyu Chen, Delu
Pan
- Abstract summary: A deep fully convolutional network (FCN) with an efficient non-local module, named ENL-FCN, is proposed for hyperspectral image (HSI) classification.
In the proposed framework, a deep FCN considers an entire HSI as input and extracts spectral-spatial information in a local receptive field.
By using a recurrent operation, each pixel's response is aggregated from all pixels of the HSI.
- Score: 28.72648031677868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based methods, such as the Convolutional Neural Network (CNN), have
demonstrated their efficiency in hyperspectral image (HSI) classification.
These methods can automatically learn spectral-spatial discriminative features
within local patches. However, for each pixel in an HSI, it is not only related
to its nearby pixels but also has connections to pixels far away from itself.
Therefore, to incorporate the long-range contextual information, a deep fully
convolutional network (FCN) with an efficient non-local module, named ENL-FCN,
is proposed for HSI classification. In the proposed framework, a deep FCN
considers an entire HSI as input and extracts spectral-spatial information in a
local receptive field. The efficient non-local module is embedded in the
network as a learning unit to capture the long-range contextual information.
Different from the traditional non-local neural networks, the long-range
contextual information is extracted in a specially designed criss-cross path for
computational efficiency. Furthermore, by using a recurrent operation, each
pixel's response is aggregated from all pixels of the HSI. The benefits of our
proposed ENL-FCN are threefold: 1) the long-range contextual information is
incorporated effectively, 2) the efficient module can be freely embedded in a
deep neural network in a plug-and-play fashion, and 3) it has much fewer
learning parameters and requires less computational resources. The experiments
conducted on three popular HSI datasets demonstrate that the proposed method
achieves state-of-the-art classification performance with lower computational
cost in comparison with several leading deep neural networks for HSI.
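The criss-cross aggregation with a recurrent pass can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: each pixel attends only to the pixels in its own row and column, and running the pass twice lets information from every pixel reach every other pixel. The real module uses learned query/key/value projections; here raw features serve as both.

```python
import numpy as np

def criss_cross_aggregate(feat):
    """One criss-cross pass: each pixel's response becomes a softmax-weighted
    sum over the features in its own row and column (minimal sketch; the
    real module uses learned projections, and the center pixel here appears
    twice in the path, once via the row and once via the column)."""
    H, W, C = feat.shape
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            q = feat[i, j]                                  # query at pixel (i, j)
            # keys/values along the criss-cross path: row i plus column j
            path = np.concatenate([feat[i, :, :], feat[:, j, :]], axis=0)
            scores = path @ q                               # affinity with each path pixel
            weights = np.exp(scores - scores.max())         # stable softmax
            weights /= weights.sum()
            out[i, j] = weights @ path                      # aggregated response
    return out

# Recurrent operation: two passes connect every pixel pair, since any
# (i, j) and (k, l) share a row/column intersection reachable in two hops.
x = np.random.default_rng(0).normal(size=(4, 5, 3))
y = criss_cross_aggregate(criss_cross_aggregate(x))
```

Compared with full non-local attention over all H*W pixels per query, each pass touches only H+W pixels per query, which is the source of the module's efficiency claim.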
Related papers
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Increasing the Accuracy of a Neural Network Using Frequency Selective Mesh-to-Grid Resampling [4.211128681972148]
We propose the use of keypoint frequency selective mesh-to-grid resampling (FSMR) for the processing of input data for neural networks.
We show that, depending on the network architecture and classification task, applying FSMR during training aids the learning process.
The classification accuracy can be increased by up to 4.31 percentage points for ResNet50 and the Oxflower17 dataset.
arXiv Detail & Related papers (2022-09-28T21:34:47Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a neoteric variant of deep convolutional neural network architecture to ameliorate the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses Transformer instead of CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
arXiv Detail & Related papers (2021-11-27T15:38:57Z)
- RSI-Net: Two-Stream Deep Neural Network Integrating GCN and Atrous CNN for Semantic Segmentation of High-resolution Remote Sensing Images [3.468780866037609]
A two-stream deep neural network for semantic segmentation of remote sensing images (RSI-Net) is proposed in this paper.
Experiments are implemented on the Vaihingen, Potsdam and Gaofen RSI datasets.
Results demonstrate the superior performance of RSI-Net in terms of overall accuracy, F1 score and kappa coefficient when compared with six state-of-the-art RSI semantic segmentation methods.
arXiv Detail & Related papers (2021-09-19T15:57:20Z)
- Hyperspectral Image Classification with Spatial Consistence Using Fully Convolutional Spatial Propagation Network [9.583523548244683]
Deep convolutional neural networks (CNNs) have shown an impressive ability to represent hyperspectral images (HSIs).
We propose a novel end-to-end, pixels-to-pixels fully convolutional spatial propagation network (FCSPN) for HSI classification.
FCSPN consists of a 3D fully convolutional network (3D-FCN) and a convolutional spatial propagation network (CSPN).
arXiv Detail & Related papers (2020-08-04T09:05:52Z)
- Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves accuracies of 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z)
- Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning [36.05574127972413]
We first deliver a 3D asymmetric inception network, AINet, to overcome the overfitting problem.
With the emphasis on spectral signatures over spatial contexts of HSI data, AINet can convey and classify the features effectively.
arXiv Detail & Related papers (2020-02-11T06:37:34Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by exploiting hierarchical features fully from all the convolution layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
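The memory and compute savings of separable 3D convolution mentioned above can be made concrete with a quick parameter count. The channel sizes below are hypothetical, not taken from the SSRNet paper: factoring one 3x3x3 kernel into a 1x3x3 spatial kernel followed by a 3x1x1 spectral kernel cuts parameters substantially.

```python
# Parameter count: full 3D kernel vs a separable spatial/spectral pair
# (hypothetical sizes: 64 input and output channels, kernel size 3, no bias).
c_in, c_out, k = 64, 64, 3

full = c_in * c_out * k * k * k                 # one 3x3x3 convolution
separable = (c_in * c_out * (1 * k * k)         # 1x3x3 spatial convolution
             + c_out * c_out * (k * 1 * 1))     # 3x1x1 spectral convolution

print(full, separable)  # 110592 vs 49152, roughly a 2.25x reduction
```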
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.