Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning
- URL: http://arxiv.org/abs/2002.04227v1
- Date: Tue, 11 Feb 2020 06:37:34 GMT
- Title: Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning
- Authors: Haokui Zhang, Yu Liu, Bei Fang, Ying Li, Lingqiao Liu and Ian Reid
- Abstract summary: We first present a 3D asymmetric inception network, AINet, to overcome the overfitting problem.
By emphasizing spectral signatures over the spatial contexts of HSI data, AINet can convey and classify features effectively.
- Score: 36.05574127972413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral image (HSI) classification has improved considerably with convolutional
neural networks (CNNs) in recent years. Unlike RGB datasets, HSI datasets are generally captured
by different remote sensors and therefore have different spectral configurations. Moreover, each
HSI dataset contains only very limited training samples, so deep CNNs are prone to overfitting.
In this paper, we first present a 3D asymmetric inception network, AINet, to overcome the
overfitting problem. By emphasizing spectral signatures over the spatial contexts of HSI data,
AINet can convey and classify features effectively. In addition, the proposed data fusion
transfer learning strategy further boosts classification performance. Extensive experiments show
that the proposed approach beats all state-of-the-art methods on several HSI benchmarks,
including Pavia University, Indian Pines and Kennedy Space Center (KSC). Code can be found at:
https://github.com/UniLauX/AINet.
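To make the factorisation idea concrete, here is a minimal PyTorch sketch of a 3D asymmetric convolution unit in the spirit of the abstract: a full 3D convolution is split into a spectral convolution along the band axis and a spatial convolution over the image plane, which reduces parameters and lets the spectral path be emphasised. The kernel sizes, channel widths, and residual wiring are illustrative assumptions rather than the exact AINet inception unit; see the linked repository for the authors' implementation. A similarly generic sketch of the pretrain-then-fine-tune pattern behind data fusion transfer learning is given after the related papers list below.

```python
import torch
import torch.nn as nn

class AsymmetricConv3dUnit(nn.Module):
    """Factorise a k x k x k 3D convolution into a spectral (k x 1 x 1) and a
    spatial (1 x k x k) convolution, with a 1x1x1 projection for the residual.
    Illustrative sketch only; not the exact AINet inception unit."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.spectral = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1), padding=(pad, 0, 0), bias=False),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.spatial = nn.Sequential(
            nn.Conv3d(out_ch, out_ch, kernel_size=(1, k, k), padding=(0, pad, pad), bias=False),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 projection so the residual connection matches channel counts.
        self.proj = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, bands, height, width)
        return self.spatial(self.spectral(x)) + self.proj(x)


if __name__ == "__main__":
    # A 7x7 spatial patch with 103 bands (e.g. Pavia University), one input channel.
    patch = torch.randn(2, 1, 103, 7, 7)
    unit = AsymmetricConv3dUnit(in_ch=1, out_ch=16)
    print(unit(patch).shape)  # torch.Size([2, 16, 103, 7, 7])
```

Stacking several such units gives a deep spectral-spatial feature extractor at a fraction of the parameter count of full 3D convolutions, which is the general pattern the abstract describes for mitigating overfitting on small HSI training sets.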
Related papers
- 3D-Convolution Guided Spectral-Spatial Transformer for Hyperspectral Image Classification [12.729885732069926]
Vision Transformers (ViTs) have shown promising classification performance over Convolutional Neural Networks (CNNs).
ViTs excel with sequential data, but they cannot extract spectral-spatial information like CNNs.
We propose a 3D-Convolution guided Spectral-Spatial Transformer (3D-ConvSST) for HSI classification.
arXiv Detail & Related papers (2024-04-20T03:39:54Z)
- Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral image (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv Detail & Related papers (2024-03-04T07:40:55Z)
- DCN-T: Dual Context Network with Transformer for Hyperspectral Image Classification [109.09061514799413]
Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions.
We propose a tri-spectral image generation pipeline that transforms HSI into high-quality tri-spectral images.
Our proposed method outperforms state-of-the-art methods for HSI classification.
arXiv Detail & Related papers (2023-04-19T18:32:52Z)
- Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
arXiv Detail & Related papers (2021-11-27T15:38:57Z)
- 3D-ANAS: 3D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification [5.727964191623458]
Hyperspectral images involve abundant spectral and spatial information, playing an irreplaceable role in land-cover classification.
Recently, based on deep learning technologies, an increasing number of HSI classification approaches have been proposed, which demonstrate promising performance.
Previous studies suffer from two major drawbacks: 1) the architectures of most deep learning models are designed manually, which relies on specialized knowledge and is relatively tedious.
arXiv Detail & Related papers (2021-01-12T04:15:40Z)
- Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer Learning [67.40866334083941]
We propose an end-to-end 3-D lightweight convolutional neural network (CNN) for limited samples-based HSI classification.
Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and lower computation cost.
Our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
arXiv Detail & Related papers (2020-12-07T03:44:35Z)
- Hyperspectral Image Classification with Spatial Consistence Using Fully Convolutional Spatial Propagation Network [9.583523548244683]
Deep convolutional neural networks (CNNs) have shown an impressive ability to represent hyperspectral images (HSIs).
We propose a novel end-to-end, pixels-to-pixels fully convolutional spatial propagation network (FCSPN) for HSI classification.
FCSPN consists of a 3D fully convolutional network (3D-FCN) and a convolutional spatial propagation network (CSPN).
arXiv Detail & Related papers (2020-08-04T09:05:52Z)
- Efficient Deep Learning of Non-local Features for Hyperspectral Image Classification [28.72648031677868]
A deep fully convolutional network (FCN) with an efficient non-local module, named ENL-FCN, is proposed for hyperspectral image (HSI) classification.
In the proposed framework, a deep FCN takes an entire HSI as input and extracts spectral-spatial information within a local receptive field.
By using a recurrent operation, each pixel's response is aggregated from all pixels of the HSI.
arXiv Detail & Related papers (2020-08-02T19:13:22Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI.
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)
- Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves accuracies of 73.6% and 68.0% mean Intersection over Union (mIoU) with inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z)
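Returning to the main paper: the abstract mentions a data fusion transfer learning strategy but does not spell out its mechanics, so the sketch below shows only a generic pretrain-then-fine-tune pattern. The toy backbone treats the spectral axis as a depth dimension (so patches with different band counts can be ingested), the classifier head is re-initialised for the target classes, and the layer-freezing choice is an assumption for illustration, not the authors' published procedure.

```python
import torch
import torch.nn as nn

def build_backbone(out_dim: int = 64) -> nn.Module:
    # Toy 3D-CNN backbone operating on (batch, 1, bands, height, width) patches.
    # Global pooling makes it agnostic to the number of spectral bands.
    return nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(16, out_dim), nn.ReLU(),
    )

class HSIClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = build_backbone()
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# 1) Pretrain on a source HSI dataset (here: 9 classes, e.g. Pavia University).
source_model = HSIClassifier(num_classes=9)
# ... pretraining loop on the source dataset would go here ...

# 2) Transfer to a target dataset (here: 16 classes, e.g. Indian Pines):
#    copy the backbone weights; the head is freshly initialised for 16 classes.
target_model = HSIClassifier(num_classes=16)
target_model.backbone.load_state_dict(source_model.backbone.state_dict())

# Optionally freeze the first convolution when target samples are very scarce.
for p in target_model.backbone[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(4, 1, 200, 7, 7)  # dummy target patches; band count differs from source
labels = torch.randint(0, 16, (4,))     # dummy target labels
loss = criterion(target_model(patches), labels)
loss.backward()
optimizer.step()
print(float(loss))
```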