Randomized Principal Component Analysis for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2403.09117v2
- Date: Wed, 5 Jun 2024 07:49:25 GMT
- Title: Randomized Principal Component Analysis for Hyperspectral Image Classification
- Authors: Mustafa Ustuner,
- Abstract summary: The number of features was reduced to 20 and 30 for the classification of two hyperspectral datasets.
The highest classification accuracies were obtained as 0.9925 and 0.9639 by LightGBM.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high-dimensional feature space of hyperspectral imagery poses major challenges for the processing and analysis of hyperspectral data sets. In such cases, dimensionality reduction is necessary to decrease the computational complexity. Random projections open up new ways of reducing dimensionality, especially for large data sets. In this paper, principal component analysis (PCA) and randomized principal component analysis (R-PCA) are investigated for the classification of hyperspectral images using support vector machines (SVM) and light gradient boosting machines (LightGBM). In these experiments, the number of features was reduced to 20 and 30 for the classification of two hyperspectral datasets (Indian Pines and Pavia University). The experimental results demonstrate that PCA outperformed R-PCA with SVM on both datasets, while the two methods yielded comparable accuracies with LightGBM. The highest classification accuracies, 0.9925 for Pavia University and 0.9639 for Indian Pines, were obtained by LightGBM using the original features.
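A minimal, hedged sketch of the pipeline the abstract describes (not the authors' code): scikit-learn exposes R-PCA via PCA(svd_solver="randomized"), and LightGBM ships a scikit-learn-compatible LGBMClassifier. The random arrays below are placeholders standing in for the Indian Pines and Pavia University pixels.

```python
# Sketch: reduce spectral features to 20/30 components with exact PCA vs
# randomized PCA, then classify with SVM and LightGBM. Placeholder data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 200))   # placeholder: pixels x spectral bands
y = rng.integers(0, 9, size=2000)  # placeholder: land-cover class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for n_components in (20, 30):
    for solver in ("full", "randomized"):       # exact PCA vs R-PCA
        pca = PCA(n_components=n_components, svd_solver=solver, random_state=0)
        Z_tr = pca.fit_transform(X_tr)
        Z_te = pca.transform(X_te)
        for name, clf in (("SVM", SVC(kernel="rbf")),
                          ("LightGBM", LGBMClassifier(random_state=0))):
            acc = clf.fit(Z_tr, y_tr).score(Z_te, y_te)
            print(f"{solver:10s} k={n_components} {name}: {acc:.4f}")
```

Randomized PCA approximates the leading components with a randomized SVD, which is why it scales better on large datacubes while producing near-identical projections to exact PCA.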
Related papers
- Datacube segmentation via Deep Spectral Clustering [76.48544221010424]
Extended Vision techniques are often challenging to interpret.
The huge dimensionality of datacube spectra makes their statistical interpretation a complex task.
In this paper, we explore the possibility of applying unsupervised clustering methods in the encoded space.
Statistical dimensionality reduction is performed by an ad hoc trained (Variational) AutoEncoder, while clustering is carried out by a (learnable) iterative K-Means algorithm.
arXiv Detail & Related papers (2024-01-31T09:31:28Z)
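A rough, assumption-laden sketch of the two-step recipe in the entry above: compress spectra with a small (plain, non-variational) autoencoder, then run K-Means in the encoded space. The architecture and sizes are illustrative guesses, not the paper's model.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

n_bands, latent_dim = 200, 8
encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_bands))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

X = torch.randn(4096, n_bands)           # placeholder for datacube spectra
for _ in range(200):                      # reconstruction training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    Z = encoder(X).numpy()                # encoded space
labels = KMeans(n_clusters=6, n_init=10).fit_predict(Z)  # clustering step
```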
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
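The ESSA attention above is the paper's own construction and is not reproduced here; as a generic, hedged illustration of the efficient-attention idea it builds on, the sketch below replaces softmax attention with a kernel feature map so the cost is linear in the number of tokens.

```python
import torch

def linear_attention(q, k, v):
    # q, k, v: (batch, tokens, dim); phi(x) = elu(x) + 1 keeps features positive
    phi_q = torch.nn.functional.elu(q) + 1
    phi_k = torch.nn.functional.elu(k) + 1
    kv = torch.einsum("btd,bte->bde", phi_k, v)          # aggregate keys/values once
    z = 1.0 / (torch.einsum("btd,bd->bt", phi_q, phi_k.sum(dim=1)) + 1e-6)
    return torch.einsum("btd,bde,bt->bte", phi_q, kv, z)  # normalized output

out = linear_attention(*[torch.randn(2, 1024, 32) for _ in range(3)])
```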
- Supervised classification methods applied to airborne hyperspectral images: Comparative study using mutual information [0.0]
This paper investigates the performance of four supervised learning algorithms: Support Vector Machines (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Linear Discriminant Analysis (LDA).
The experiments were performed on three real hyperspectral datasets acquired by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Reflective Optics System Imaging Spectrometer (ROSIS).
arXiv Detail & Related papers (2022-10-27T13:39:08Z)
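A hedged sketch of the comparison above: rank bands by mutual information with the labels, keep the top ones, and cross-validate the four classifiers. Shapes and parameters are placeholders, not the paper's protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 220))      # placeholder AVIRIS/ROSIS-like pixels
y = rng.integers(0, 9, size=1000)

classifiers = {
    "SVM": SVC(), "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(), "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=30), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```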
- New wrapper method based on normalized mutual information for dimension reduction and classification of hyperspectral images [0.0]
We propose a new wrapper method based on normalized mutual information (NMI) and error probability (PE).
Experiments were performed on two challenging hyperspectral benchmark datasets captured by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
arXiv Detail & Related papers (2022-10-25T21:17:11Z)
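A loose sketch in the spirit of the wrapper entry above (the paper's exact NMI/PE criterion is not reproduced): greedily add the band whose inclusion maximizes the normalized mutual information between a classifier's cross-validated predictions and the true labels.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))        # placeholder hyperspectral pixels
y = rng.integers(0, 5, size=500)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(10):                    # pick 10 bands greedily
    def score(band):
        pred = cross_val_predict(KNeighborsClassifier(),
                                 X[:, selected + [band]], y, cv=3)
        return normalized_mutual_info_score(y, pred)
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)
print("selected bands:", selected)
```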
- Optimal Discriminant Analysis in High-Dimensional Latent Factor Models [1.4213973379473654]
In high-dimensional classification problems, a commonly used approach is to first project the high-dimensional features into a lower-dimensional space.
We formulate a latent-variable model with a hidden low-dimensional structure to justify this two-step procedure.
We propose a computationally efficient classifier that takes certain principal components (PCs) of the observed features as projections.
arXiv Detail & Related papers (2022-10-23T21:45:53Z)
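A minimal sketch of the two-step procedure above: project onto leading principal components, then run a linear discriminant classifier on the projections. The paper's specific PC-selection rule is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 300))       # placeholder high-dimensional features
y = rng.integers(0, 2, size=600)

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```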
- HyperPCA: a Powerful Tool to Extract Elemental Maps from Noisy Data Obtained in LIBS Mapping of Materials [7.648784748888189]
We introduce HyperPCA, a new analysis tool for hyperspectral images based on a sparse representation of the data.
We show that the method presents advantages both in quantity and quality of the information recovered, thus improving the physico-chemical characterisation of analysed surfaces.
arXiv Detail & Related papers (2021-11-30T07:52:44Z)
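As a generic stand-in for the sparse-representation idea above (not the HyperPCA algorithm itself), scikit-learn's SparsePCA extracts components with sparse loadings from a pixels-by-channels matrix; the per-pixel scores can be reshaped into elemental-style maps.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))      # placeholder LIBS spectra
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
scores = spca.fit_transform(X)        # per-pixel component scores -> maps
print(scores.shape, np.mean(spca.components_ == 0))  # sparsity of loadings
```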
- Class-Wise Principal Component Analysis for hyperspectral image feature extraction [0.0]
This paper introduces the Class-wise Principal Component Analysis, a supervised feature extraction method for hyperspectral data.
Dimensionality reduction is an essential preprocessing step to complement a hyperspectral image classification task.
arXiv Detail & Related papers (2021-04-09T17:25:11Z)
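A hedged sketch of a class-wise PCA: fit a separate PCA on the samples of each class and stack the leading components into one supervised projection. The paper's exact construction may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 200))       # placeholder hyperspectral pixels
y = rng.integers(0, 3, size=900)      # class labels

components = []
for c in np.unique(y):
    pca = PCA(n_components=4).fit(X[y == c])   # PCA restricted to class c
    components.append(pca.components_)
W = np.vstack(components)             # (n_classes * 4, n_bands)
Z = X @ W.T                           # supervised feature extraction
print(Z.shape)
```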
- Enhanced Principal Component Analysis under A Collaborative-Robust Framework [89.28334359066258]
We introduce a general collaborative-robust weight learning framework that combines weight learning and robust loss in a non-trivial way.
Under the proposed framework, only a subset of well-fitting samples is activated and treated as more important during training, while the remaining samples, whose errors are large, are not ignored outright.
In particular, the negative effects of inactivated samples are alleviated by the robust loss function.
arXiv Detail & Related papers (2021-03-22T15:17:37Z)
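An assumption-based analogue of the idea above, not the paper's method: an iteratively reweighted PCA in which samples with small reconstruction error receive large weights (are "activated") while samples with large error are smoothly down-weighted by a robust, Welsch-style function rather than dropped.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))        # placeholder data
w = np.ones(len(X))                   # sample weights
for _ in range(10):
    mu = np.average(X, axis=0, weights=w)
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc / w.sum()   # weighted covariance
    _, V = np.linalg.eigh(C)
    P = V[:, -5:]                             # top 5 principal directions
    resid = np.linalg.norm(Xc - Xc @ P @ P.T, axis=1)
    sigma = np.median(resid) + 1e-12
    w = np.exp(-(resid / sigma) ** 2)         # robust reweighting
print("active samples (w > 0.5):", int((w > 0.5).sum()))
```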
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
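A plausible form of the objective sketched from the summary above (our hedged reading; the paper's exact constraints may differ), with $X$ the data matrix and $W$ the projection whose row sparsity drives feature selection:

```latex
\min_{W^{\top}W = I}\; \lVert X - XWW^{\top}\rVert_F^2 \;+\; \lambda\,\lVert W\rVert_{2,p}^{p},
\qquad
\lVert W\rVert_{2,p}^{p} \;=\; \sum_{i=1}^{d} \Bigl(\sum_{j=1}^{k} w_{ij}^{2}\Bigr)^{p/2}.
```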
- Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis.
The technique is called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
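A speculative sketch of the general spatial-spectral idea above (an assumption, not the paper's SSPN): 3-D convolutions mix neighbouring bands and pixels jointly, wrapped in the residual connection typical of single-image super-resolution networks.

```python
import torch
import torch.nn as nn

class SpatialSpectralBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),  # (band, H, W)
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):               # x: (batch, channels, bands, H, W)
        return x + self.body(x)         # residual learning

x = torch.randn(1, 16, 31, 32, 32)      # placeholder hyperspectral feature cube
print(SpatialSpectralBlock()(x).shape)
```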
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.