Fuzziness-based Spatial-Spectral Class Discriminant Information
Preserving Active Learning for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2005.14236v1
- Date: Thu, 28 May 2020 18:58:11 GMT
- Title: Fuzziness-based Spatial-Spectral Class Discriminant Information
Preserving Active Learning for Hyperspectral Image Classification
- Authors: Muhammad Ahmad
- Abstract summary: This work proposes a novel fuzziness-based spatial-spectral method that preserves both local and global, within- and between-class discriminant information (FLG).
Experimental results on benchmark HSI datasets demonstrate the effectiveness of the FLG method on Generative, Extreme Learning Machine and Sparse Multinomial Logistic Regression.
- Score: 0.456877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional Active/Self/Interactive Learning for Hyperspectral Image
Classification (HSIC) increases the size of the training set without
considering the class scatter and the randomness among the existing and newly
added samples. Moreover, very limited research has been carried out on joint
spectral-spatial information, and finally, the stopping criterion, a minor but
still noteworthy issue, has received little attention from the community.
Therefore, this work proposes a novel fuzziness-based spatial-spectral method
that preserves both local and global, within- and between-class discriminant
information (FLG). We first investigate spatial-prior, fuzziness-based
information about misclassified samples. We then compute the total local and
global within- and between-class information and formulate it in a
fine-grained manner. This information is then fed to a discriminative
objective function to query heterogeneous samples, which eliminates the
randomness among the training samples. Experimental results on benchmark HSI
datasets demonstrate the effectiveness of the FLG method with Generative, Extreme
Learning Machine and Sparse Multinomial Logistic Regression (SMLR)-LORSAL
classifiers.
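As a rough illustration of the pipeline the abstract describes (fuzziness estimation, within- and between-class scatter, and a discriminative query of heterogeneous samples), a minimal NumPy sketch follows. The function names, the candidate pre-filter factor, and the Fisher-style ranking are illustrative assumptions, not the paper's actual FLG objective or its spatial prior.

```python
import numpy as np

def fuzziness(membership):
    """Per-pixel fuzziness of a classifier's class-membership matrix
    (rows sum to 1); high values flag likely-misclassified pixels."""
    mu = np.clip(membership, 1e-12, 1.0 - 1e-12)
    return -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)).sum(axis=1) / mu.shape[1]

def scatter_matrices(X, y):
    """Global within-class (Sw) and between-class (Sb) scatter matrices."""
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def query(X_pool, membership, n_query=50, pool_factor=5):
    """Select n_query pixels: pre-filter by fuzziness, then rank the
    candidates by distance to their (pseudo-labelled) class mean relative
    to the within-class spread, so the queried batch is both uncertain and
    heterogeneous. Also returns trace(Sb)/trace(Sw) of the candidate set
    as a rough class-discriminability diagnostic (an assumption here,
    standing in for the paper's discriminative objective)."""
    fz = fuzziness(membership)
    cand = np.argsort(fz)[-pool_factor * n_query:]   # most uncertain pixels
    y_hat = membership.argmax(axis=1)                # pseudo-labels
    Sw, Sb = scatter_matrices(X_pool[cand], y_hat[cand])
    within_spread = np.sqrt(np.trace(Sw) / max(len(cand), 1)) + 1e-12
    score = np.empty(len(cand))
    for i, idx in enumerate(cand):
        same = cand[y_hat[cand] == y_hat[idx]]
        mc = X_pool[same].mean(axis=0)
        score[i] = np.linalg.norm(X_pool[idx] - mc) / within_spread
    queried = cand[np.argsort(score)[-n_query:]]
    return queried, np.trace(Sb) / (np.trace(Sw) + 1e-12)
```

In practice, `membership` would come from the probabilistic output of the classifier trained on the current labelled set (e.g. the SMLR-LORSAL or ELM classifiers mentioned in the abstract), and the queried pixels would be labelled and appended to the training set at each active-learning iteration.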
Related papers
- Hyperspectral Image Analysis with Subspace Learning-based One-Class
Classification [18.786429304405097]
Hyperspectral image (HSI) classification is an important task in many applications, such as environmental monitoring, medical imaging, and land use/land cover (LULC) classification.
In this study, we investigate recently proposed subspace learning methods for one-class classification (OCC).
In this way, there is no separate dimensionality reduction or feature selection procedure needed in the proposed classification framework.
Considering the imbalanced labels of the LULC classification problem and rich spectral information (high number of dimensions), the proposed classification approach is well-suited for HSI data.
arXiv Detail & Related papers (2023-04-19T15:17:05Z) - One-Class Risk Estimation for One-Class Hyperspectral Image
Classification [8.206701378422968]
Hyperspectral imagery (HSI) one-class classification is aimed at identifying a single target class from the HSI.
Deep learning-based methods are currently the mainstream approach for overcoming distribution overlap in HSI multiclass classification.
In this article, a weakly supervised deep HSI one-class classification, HOneCls, is proposed.
arXiv Detail & Related papers (2022-10-27T14:15:13Z) - Hierarchical Semi-Supervised Contrastive Learning for
Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Entropy-Based Uncertainty Calibration for Generalized Zero-Shot Learning [49.04790688256481]
The goal of generalized zero-shot learning (GZSL) is to recognise both seen and unseen classes.
Most GZSL methods typically learn to synthesise visual representations from semantic information on the unseen classes.
We propose a novel framework that leverages dual variational autoencoders with a triplet loss to learn discriminative latent features.
arXiv Detail & Related papers (2021-01-09T05:21:27Z) - A Boundary Based Out-of-Distribution Classifier for Generalized
Zero-Shot Learning [83.1490247844899]
Generalized Zero-Shot Learning (GZSL) is a challenging topic that has promising prospects in many realistic scenarios.
We propose a boundary based Out-of-Distribution (OOD) classifier which classifies the unseen and seen domains by only using seen samples for training.
We extensively validate our approach on five popular benchmark datasets including AWA1, AWA2, CUB, FLO and SUN.
arXiv Detail & Related papers (2020-08-09T11:27:19Z) - Progressive Cluster Purification for Unsupervised Feature Learning [48.87365358296371]
In unsupervised feature learning, sample-specificity-based methods ignore inter-class information.
We propose a novel clustering based method, which excludes class inconsistent samples during progressive cluster formation.
Our approach, referred to as Progressive Cluster Purification (PCP), implements progressive clustering by gradually reducing the number of clusters during training.
arXiv Detail & Related papers (2020-07-06T08:11:03Z) - GIM: Gaussian Isolation Machines [40.7916016364212]
In many cases, neural network classifiers are exposed to input data that lies outside of their training distribution.
We present a novel hybrid (generative-discriminative) classifier aimed at solving the problem arising when OOD data is encountered.
The proposed GIM's novelty lies in its discriminative performance and generative capabilities, a combination of characteristics not usually seen in a single classifier.
arXiv Detail & Related papers (2020-02-06T09:51:47Z)