3D/2D regularized CNN feature hierarchy for Hyperspectral image
classification
- URL: http://arxiv.org/abs/2104.12136v1
- Date: Sun, 25 Apr 2021 11:26:56 GMT
- Title: 3D/2D regularized CNN feature hierarchy for Hyperspectral image
classification
- Authors: Muhammad Ahmad, Manuel Mazzara, and Salvatore Distefano
- Abstract summary: Convolutional Neural Networks (CNN) have been rigorously studied for Hyperspectral Image Classification (HSIC).
We propose an idea to enhance the generalization performance of a hybrid CNN for HSIC using soft labels.
We empirically show that in improving generalization performance, label smoothing also improves model calibration.
- Score: 1.2359001424473932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional Neural Networks (CNN) have been rigorously studied for
Hyperspectral Image Classification (HSIC) and are known to be effective at
exploiting joint spatial-spectral information, at the expense of lower
generalization performance and learning speed due to hard labels and a
non-uniform distribution over labels. Several regularization techniques have
been used to overcome these issues. However, models sometimes learn to predict
samples with extreme confidence, which is harmful from a generalization point
of view. Therefore, this paper proposes to enhance the generalization
performance of a hybrid CNN for HSIC using soft labels that are a weighted
average of the hard labels and a uniform distribution over the class labels.
The proposed method helps prevent the CNN from becoming over-confident. We
empirically show that, in addition to improving generalization performance,
label smoothing also improves model calibration, which significantly improves
beam search. Several publicly available Hyperspectral datasets are used for
experimental evaluation, which reveals improved generalization performance,
statistical significance, and computational complexity compared to
state-of-the-art models. The code will be made available at
https://github.com/mahmad00.
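As a concrete illustration of the soft labels described in the abstract, here is a minimal sketch (not the authors' released code; the function name and the smoothing factor alpha are assumptions) that mixes one-hot hard labels with a uniform distribution over the classes:

```python
import numpy as np

def smooth_labels(hard_labels, num_classes, alpha=0.1):
    """Soft labels as a weighted average of one-hot (hard) labels and a
    uniform distribution over the classes (alpha is a hypothetical default)."""
    one_hot = np.eye(num_classes)[np.asarray(hard_labels)]   # (N, C) hard labels
    uniform = np.full((1, num_classes), 1.0 / num_classes)   # uniform over labels
    return (1.0 - alpha) * one_hot + alpha * uniform         # (N, C) soft labels

# Example with 3 classes and alpha = 0.1:
# smooth_labels([0, 2], 3) -> approximately [[0.933, 0.033, 0.033],
#                                            [0.033, 0.033, 0.933]]
```

With alpha = 0.1 and three classes, the target class receives probability about 0.933 and every other class about 0.033, so the network is discouraged from driving its output probabilities to exactly 0 or 1.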
Related papers
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Adaptive Label Smoothing To Regularize Large-Scale Graph Training [46.00927775402987]
We propose the adaptive label smoothing (ALS) method to replace the one-hot hard labels with smoothed ones.
ALS propagates node labels to aggregate the neighborhood label distribution in a pre-processing step, and then updates the optimal smoothed labels online to adapt to specific graph structure.
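A hedged sketch of the pre-processing step summarized above, assuming an adjacency-list graph representation and a simple mean over neighbor labels (the online update of the smoothed labels described in the paper is omitted):

```python
import numpy as np

def neighborhood_label_smoothing(labels, adjacency, num_classes, alpha=0.1):
    """Replace each node's one-hot label with a mix of that label and the
    label distribution aggregated from its neighbors (hypothetical sketch)."""
    one_hot = np.eye(num_classes)[np.asarray(labels)]
    smoothed = one_hot.copy()
    for node, neighbors in enumerate(adjacency):
        if not neighbors:
            continue                                   # isolated node keeps its hard label
        neigh_dist = one_hot[neighbors].mean(axis=0)   # neighborhood label distribution
        smoothed[node] = (1 - alpha) * one_hot[node] + alpha * neigh_dist
    return smoothed
```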
arXiv Detail & Related papers (2021-08-30T23:51:31Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
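A minimal sketch of the general idea of adversarial augmentation on intermediate feature embeddings, assuming a single FGSM-style step in feature space (`head`, `epsilon`, and the one-step formulation are illustrative assumptions, not the paper's exact method):

```python
import torch
import torch.nn.functional as F

def adversarial_feature_augmentation(features, head, labels, epsilon=0.1):
    """Perturb intermediate features in the direction that increases the
    classification loss; the perturbed features can then be used as extra
    training examples alongside the clean ones (hypothetical sketch)."""
    features = features.detach().requires_grad_(True)
    loss = F.cross_entropy(head(features), labels)   # head maps features to logits
    grad, = torch.autograd.grad(loss, features)
    perturbed = features + epsilon * grad.sign()     # FGSM-style step in feature space
    return perturbed.detach()
```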
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
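A hedged sketch of the online idea described above: accumulate the model's softmax predictions per target class during an epoch and use the per-class average, mixed with the hard label, as the soft target in the next epoch (the class name, the mixing weight alpha, and the exact update rule are assumptions):

```python
import numpy as np

class OnlineLabelSmoother:
    """Maintains per-class soft labels from the statistics of model
    predictions on samples of that class (hypothetical sketch)."""

    def __init__(self, num_classes, alpha=0.5):
        self.num_classes = num_classes
        self.alpha = alpha                     # weight on accumulated statistics
        self.soft = np.eye(num_classes)        # start from one-hot targets
        self._sums = np.zeros((num_classes, num_classes))
        self._counts = np.zeros(num_classes)

    def accumulate(self, probs, targets):
        """probs: (N, C) softmax outputs; targets: (N,) true class indices."""
        for p, t in zip(probs, targets):
            self._sums[t] += p
            self._counts[t] += 1

    def end_epoch(self):
        """Refresh the per-class soft labels from the accumulated statistics."""
        eye = np.eye(self.num_classes)
        for c in range(self.num_classes):
            if self._counts[c] > 0:
                avg = self._sums[c] / self._counts[c]
                self.soft[c] = (1 - self.alpha) * eye[c] + self.alpha * avg
        self._sums[:] = 0
        self._counts[:] = 0

    def targets_for(self, labels):
        return self.soft[np.asarray(labels)]   # (N, C) soft labels for a batch
```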
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone so that it is difficult to guarantee the annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
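A hedged sketch of combining the original labels with the previous epoch's predictions (the mixing weight and the simple convex combination are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def temporal_calibrated_targets(one_hot, prev_epoch_probs, beta=0.7):
    """Training targets that mix the original hard labels with the softmax
    predictions recorded for the same samples in the previous epoch.

    one_hot: (N, C) original labels; prev_epoch_probs: (N, C) previous-epoch predictions.
    """
    return beta * np.asarray(one_hot) + (1 - beta) * np.asarray(prev_epoch_probs)
```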
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
- Regularization via Structural Label Smoothing [22.74769739125912]
Regularization is an effective way to promote the generalization performance of machine learning models.
In this paper, we focus on label smoothing, a form of output distribution regularization that prevents overfitting of a neural network.
We show that such label smoothing imposes a quantifiable bias in the Bayes error rate of the training data.
arXiv Detail & Related papers (2020-01-07T05:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.