Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer
Learning
- URL: http://arxiv.org/abs/2012.03439v1
- Date: Mon, 7 Dec 2020 03:44:35 GMT
- Title: Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer
Learning
- Authors: Haokui Zhang, Ying Li, Yenan Jiang, Peng Wang, Qiang Shen, and Chunhua
Shen
- Abstract summary: We propose an end-to-end 3-D lightweight convolutional neural network (CNN) for limited samples-based HSI classification.
Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and a lower computation cost.
Our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
- Score: 67.40866334083941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, hyperspectral image (HSI) classification approaches based on deep
learning (DL) models have been proposed and shown promising performance.
However, because of very limited available training samples and massive model
parameters, DL methods may suffer from overfitting. In this paper, we propose
an end-to-end 3-D lightweight convolutional neural network (CNN) (abbreviated
as 3-D-LWNet) for limited samples-based HSI classification. Compared with
conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network
structure, fewer parameters, and a lower computation cost, resulting in better
classification performance. To further alleviate the small sample problem, we
also propose two transfer learning strategies: 1) cross-sensor strategy, in
which we pretrain a 3-D model in the source HSI data sets containing a greater
number of labeled samples and then transfer it to the target HSI data sets and
2) cross-modal strategy, in which we pretrain a 3-D model in the 2-D RGB image
data sets containing a large number of samples and then transfer it to the
target HSI data sets. In contrast to previous approaches, we do not impose
restrictions on the source data sets: they need not be collected by the same
sensors as the target data sets. Experiments on three
public HSI data sets captured by different sensors demonstrate that our model
achieves competitive performance for HSI classification compared to several
state-of-the-art methods.
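The abstract describes the architecture and the transfer strategies only at a high level. As a rough illustration (not the authors' actual 3-D-LWNet code), the PyTorch sketch below shows a generic lightweight 3-D convolutional block and one common way to realize the cross-modal idea by inflating pretrained 2-D kernels along the spectral axis; all layer widths, kernel sizes, and the inflate_2d_to_3d helper are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' 3-D-LWNet implementation.
# All layer widths, kernel sizes, and the inflation helper are assumptions.
import torch
import torch.nn as nn


class Lightweight3DBlock(nn.Module):
    """A generic lightweight 3-D unit: 3-D conv -> batch norm -> ReLU."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3,
                              stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))


def inflate_2d_to_3d(w2d: torch.Tensor, depth: int) -> torch.Tensor:
    """Cross-modal transfer idea: replicate a pretrained 2-D kernel along the
    spectral axis and rescale it, so RGB-pretrained weights can initialize a
    3-D convolution (shape (out, in, kH, kW) -> (out, in, depth, kH, kW))."""
    return w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth


if __name__ == "__main__":
    block = Lightweight3DBlock(in_ch=1, out_ch=16)
    # Toy HSI patch: batch of 2, one input channel, 30 bands, 9x9 spatial window.
    patch = torch.randn(2, 1, 30, 9, 9)
    print(block(patch).shape)  # torch.Size([2, 16, 30, 9, 9])
```

Dividing by the replication depth is a common heuristic for keeping activation magnitudes comparable after inflation; whether 3-D-LWNet uses this particular trick is not stated in the abstract.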
Related papers
- Large Generative Model Assisted 3D Semantic Communication [51.17527319441436]
We propose a Generative AI Model assisted 3D SC (GAM-3DSC) system.
First, we introduce a 3D Semantic Extractor (3DSE) to extract key semantics from a 3D scenario based on user requirements.
We then present an Adaptive Semantic Compression Model (ASCM) for encoding these multi-perspective images.
Finally, we design a conditional Generative adversarial network and Diffusion model aided-Channel Estimation (GDCE) to estimate and refine the Channel State Information (CSI) of physical channels.
arXiv Detail & Related papers (2024-03-09T03:33:07Z)
- Dual-Perspective Knowledge Enrichment for Semi-Supervised 3D Object Detection [55.210991151015534]
We present a novel Dual-Perspective Knowledge Enrichment approach named DPKE for semi-supervised 3D object detection.
Our DPKE enriches the knowledge of limited training data, particularly unlabeled data, from two perspectives: data-perspective and feature-perspective.
arXiv Detail & Related papers (2024-01-10T08:56:07Z)
- Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection [34.2238222373818]
Current 3D object detection models follow a single dataset-specific training and testing paradigm.
In this paper, we study the task of training a unified 3D detector from multiple datasets.
We present Uni3D, which leverages a simple data-level correction operation and a designed semantic-level coupling-and-recoupling module.
arXiv Detail & Related papers (2023-03-13T05:54:13Z)
- Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution processes.
arXiv Detail & Related papers (2021-11-27T15:38:57Z)
- On the Importance of 3D Surface Information for Remote Sensing Classification Tasks [0.0]
Adding 3D surface information to RGB imagery can provide crucial geometric information for semantic classes such as buildings.
We assess classification performance using multispectral imagery from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D Semantic Labeling contest and the United States Special Operations Command (USSOCOM) Urban 3D Challenge.
arXiv Detail & Related papers (2021-04-26T19:55:51Z)
- 3D-ANAS: 3D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification [5.727964191623458]
Hyperspectral images involve abundant spectral and spatial information, playing an irreplaceable role in land-cover classification.
Recently, based on deep learning technologies, an increasing number of HSI classification approaches have been proposed, which demonstrate promising performance.
Previous studies suffer from two major drawbacks: 1) the architectures of most deep learning models are designed manually, which relies on specialized knowledge and is relatively tedious.
arXiv Detail & Related papers (2021-01-12T04:15:40Z)
- LiteDepthwiseNet: An Extreme Lightweight Network for Hyperspectral Image Classification [9.571458051525768]
This paper proposes a new network architecture, LiteDepthwiseNet, for hyperspectral image (HSI) classification.
LiteDepthwiseNet decomposes standard convolution into depthwise convolution and pointwise convolution, which can achieve high classification performance with minimal parameters (a brief code sketch of this decomposition is given after this list).
Experiment results on three benchmark hyperspectral datasets show that LiteDepthwiseNet achieves state-of-the-art performance with a very small number of parameters and low computational cost.
arXiv Detail & Related papers (2020-10-15T13:12:17Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- 3DSSD: Point-based 3D Single Stage Object Detector [61.67928229961813]
We present a point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency.
Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well.
arXiv Detail & Related papers (2020-02-24T12:01:58Z)
- Hyperspectral Classification Based on 3D Asymmetric Inception Network with Data Fusion Transfer Learning [36.05574127972413]
We first deliver a 3D asymmetric inception network, AINet, to overcome the overfitting problem.
With the emphasis on spectral signatures over spatial contexts of HSI data, AINet can convey and classify the features effectively.
arXiv Detail & Related papers (2020-02-11T06:37:34Z)
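As referenced in the LiteDepthwiseNet entry above, the decomposition of a standard convolution into a depthwise plus a pointwise convolution can be illustrated with a short PyTorch sketch; the channel counts and kernel size below are assumptions for demonstration, not the paper's configuration.

```python
# Illustrative sketch of a depthwise-separable 3-D convolution -- an assumption
# for demonstration, not LiteDepthwiseNet's actual architecture.
import torch
import torch.nn as nn


class DepthwiseSeparableConv3d(nn.Module):
    """Standard conv factored into a per-channel (depthwise) 3-D conv
    followed by a 1x1x1 (pointwise) conv that mixes channels."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))


if __name__ == "__main__":
    layer = DepthwiseSeparableConv3d(in_ch=16, out_ch=32)
    x = torch.randn(1, 16, 20, 9, 9)  # (batch, channels, bands, height, width)
    print(layer(x).shape)             # torch.Size([1, 32, 20, 9, 9])
    # Compare parameter counts against a standard 3x3x3 convolution.
    standard = nn.Conv3d(16, 32, kernel_size=3, padding=1, bias=False)
    n_sep = sum(p.numel() for p in layer.parameters())      # 944
    n_std = sum(p.numel() for p in standard.parameters())   # 13824
    print(n_sep, n_std)
```

For a 3x3x3 kernel mapping 16 channels to 32, the separable version uses 944 weights versus 13,824 for the standard convolution, which is the source of the parameter savings the entry refers to.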