Classification of Hyperspectral Images by Using Spectral Data and Fully
Connected Neural Network
- URL: http://arxiv.org/abs/2201.02821v1
- Date: Sat, 8 Jan 2022 12:45:48 GMT
- Title: Classification of Hyperspectral Images by Using Spectral Data and Fully
Connected Neural Network
- Authors: Zumray Dokur, Tamer Olmez
- Abstract summary: Classification accuracies above 90% have been achieved for hyperspectral images.
In this study, the Indian Pines, Salinas, Pavia Centre, Pavia University and Botswana hyperspectral images are classified.
An average accuracy of 97.5% is achieved on the test sets of all hyperspectral images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is observed that high classification performance is achieved for one- and
two-dimensional signals by using deep learning methods. In this context, most
researchers have tried to classify hyperspectral images by using deep learning
methods, and classification accuracies above 90% have been achieved for these images.
Deep neural networks (DNNs) actually consist of two parts: i) a convolutional
neural network (CNN) and ii) a fully connected neural network (FCNN). While the CNN
extracts the features, the FCNN performs the classification. In the classification of
hyperspectral images, it is observed that almost all researchers have applied
2D or 3D convolution filters to the spatial data in addition to the spectral data
(features). It is appropriate to use convolution filters on images or time
signals. In hyperspectral images, however, each pixel is represented by a signature
vector whose individual features are independent of each other. Since the order
of the features in this vector can be changed, it does not make sense to apply
convolution filters to these features as one would to time signals. At the
same time, since the hyperspectral images do not have a textural structure,
there is no need to use spatial data in addition to the spectral data. In this study,
the Indian Pines, Salinas, Pavia Centre, Pavia University and Botswana hyperspectral
images are classified by using only a fully connected neural network and the
one-dimensional spectral data. An average accuracy of 97.5% is achieved on the
test sets of all hyperspectral images.
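To make the approach concrete, the sketch below shows the kind of per-pixel classifier the abstract describes: a plain fully connected network fed only the one-dimensional spectral signature of each pixel, with no convolutions and no spatial patches. It is a minimal illustration in PyTorch, not the authors' implementation; the layer widths are arbitrary, and the band and class counts assume the Indian Pines dataset (about 200 usable bands, 16 classes).

```python
# Minimal sketch of a spectral-only FCNN classifier (not the authors' code).
# Each training sample is one pixel's spectral signature vector.
import torch
import torch.nn as nn

n_bands, n_classes = 200, 16              # assumed: Indian Pines band/class counts

fcnn = nn.Sequential(                     # fully connected layers only, no convolutions
    nn.Linear(n_bands, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, n_classes),             # raw class scores for each pixel
)

# Dummy batch standing in for 32 labelled pixels from the ground-truth map.
x = torch.rand(32, n_bands)               # 32 spectral signatures
y = torch.randint(0, n_classes, (32,))    # 32 land-cover labels

loss = nn.CrossEntropyLoss()(fcnn(x), y)  # standard supervised objective
loss.backward()                           # gradients for one training step
```

Because the first layer connects every band to every hidden unit, the architecture encodes no assumption about band ordering, which matches the abstract's argument that spectral features have no meaningful local order for convolution filters to exploit.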
Related papers
- Hybrid CNN Bi-LSTM neural network for Hyperspectral image classification [1.2691047660244332]
This paper proposes a neural network combining 3-D CNN, 2-D CNN and Bi-LSTM.
It achieves 99.83, 99.98 and 100 percent accuracy on the IP, PU and SA datasets respectively, while using only 30 percent of the trainable parameters of the state-of-the-art model.
arXiv Detail & Related papers (2024-02-15T15:46:13Z)
- Why do CNNs excel at feature extraction? A mathematical explanation [53.807657273043446]
We introduce a novel model for image classification, based on feature extraction, that can be used to generate images resembling real-world datasets.
In our proof, we construct piecewise linear functions that detect the presence of features, and show that they can be realized by a convolutional network.
arXiv Detail & Related papers (2023-07-03T10:41:34Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experimental results show the high generalization performance of our method on test data composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- OSLO: On-the-Sphere Learning for Omnidirectional images and its application to 360-degree image compression [59.58879331876508]
We study the learning of representation models for omnidirectional images and propose to use the properties of HEALPix uniform sampling of the sphere to redefine the mathematical tools used in deep learning models for omnidirectional images.
Our proposed on-the-sphere solution leads to a better compression gain that can save 13.7% of the bit rate compared to similar learned models applied to equirectangular images.
arXiv Detail & Related papers (2021-07-19T22:14:30Z)
- SpectralNET: Exploring Spatial-Spectral WaveletCNN for Hyperspectral Image Classification [0.0]
Hyperspectral Image (HSI) classification using Convolutional Neural Networks (CNN) is widely found in the current literature.
We propose SpectralNET, a wavelet CNN, which is a variation of 2D CNN for multi-resolution HSI classification.
arXiv Detail & Related papers (2021-04-01T08:45:15Z)
- Learning Hybrid Representations for Automatic 3D Vessel Centerline Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires to take into account the global geometry.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z)
- Harnessing spatial homogeneity of neuroimaging data: patch individual filter layers for CNNs [0.0]
We suggest a new CNN architecture that combines the idea of hierarchical abstraction in neural networks with a prior on the spatial homogeneity of neuroimaging data.
By learning filters in individual image regions (patches) without sharing weights, PIF layers can learn abstract features faster and with fewer samples.
arXiv Detail & Related papers (2020-07-23T10:11:43Z)
- Patch Based Classification of Remote Sensing Data: A Comparison of 2D-CNN, SVM and NN Classifiers [0.0]
We compare the performance of patch-based SVM and NN classifiers with that of a deep learning algorithm comprising a 2D-CNN and fully connected layers.
Results with both datasets suggest the effectiveness of patch based SVM and NN.
arXiv Detail & Related papers (2020-06-21T11:07:37Z)
- Learning Hyperspectral Feature Extraction and Classification with ResNeXt Network [2.9967206019304937]
Hyperspectral image (HSI) classification is a standard remote sensing task, in which each image pixel is given a label indicating the physical land-cover on the earth's surface.
The utilization of both the spectral and spatial cues in hyperspectral images has shown improved classification accuracy in hyperspectral image classification.
The use of only 3D Convolutional Neural Networks (3D-CNN) to extract both spatial and spectral cues from hyperspectral images results in an explosion of parameters and hence a high computational cost.
We propose a network architecture called MixedSN that utilizes 3D convolutions to model spectral-spatial information.
arXiv Detail & Related papers (2020-02-07T01:54:15Z)
- R-FCN: Object Detection via Region-based Fully Convolutional Networks [87.62557357527861]
We present region-based, fully convolutional networks for accurate and efficient object detection.
Our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart.
arXiv Detail & Related papers (2016-05-20T15:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.