Training Methods of Multi-label Prediction Classifiers for Hyperspectral
Remote Sensing Images
- URL: http://arxiv.org/abs/2301.06874v2
- Date: Thu, 26 Oct 2023 13:05:07 GMT
- Title: Training Methods of Multi-label Prediction Classifiers for Hyperspectral
Remote Sensing Images
- Authors: Salma Haidar and Jos\'e Oramas
- Abstract summary: We propose a multi-label, patch-level classification method for hyperspectral remote sensing images.
We use patches of reduced spatial dimension but complete spectral depth, extracted from the remote sensing images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With their combined spectral depth and geometric resolution, hyperspectral
remote sensing images embed a wealth of complex, non-linear information that
challenges traditional computer vision techniques. Deep learning methods, known
for their representation learning capabilities, are better suited to handling
such complexities. Unlike applications that focus on single-label, pixel-level
classification of hyperspectral remote sensing images, we propose a
multi-label, patch-level classification method based on a two-component
deep-learning network. We use patches of reduced spatial dimension but complete
spectral depth, extracted from the remote sensing images. Additionally, we
investigate three training schemes for our network: Iterative, Joint, and
Cascade. Experiments suggest that the Joint scheme performs best; however,
applying it requires an expensive search for the best weight combination of the
loss constituents. The Iterative scheme enables feature sharing between the two
parts of the network at the early stages of training and performs better on
complex, multi-label data. Further experiments showed that methods with
different architectures performed well when trained on patches extracted and
labeled according to our sampling method.
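The patch-level, multi-label sampling described above can be illustrated with a short sketch: slide a small spatial window over the hyperspectral cube, keep the full spectral depth, and attach a multi-hot label vector listing every class present inside the window. The window size, stride, and label-assignment rule below are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of patch-level, multi-label sampling from a hyperspectral cube.
# Window size, stride, and labeling rule are assumptions for illustration.
import numpy as np

def extract_multilabel_patches(cube, label_map, patch_size=9, stride=9):
    """Slide a small spatial window over a cube of shape (H, W, B), keeping the
    full spectral depth B, and build a multi-hot vector of every class present
    inside the window."""
    h, w, _ = cube.shape
    n_classes = int(label_map.max()) + 1
    patches, labels = [], []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patch = cube[r:r + patch_size, c:c + patch_size, :]        # (p, p, B)
            window = label_map[r:r + patch_size, c:c + patch_size]
            multi_hot = np.zeros(n_classes, dtype=np.float32)
            multi_hot[np.unique(window)] = 1.0                          # all classes in the patch
            patches.append(patch)
            labels.append(multi_hot)
    return np.stack(patches), np.stack(labels)

# Toy usage: a 64x64 scene with 103 spectral bands and 10 classes.
cube = np.random.rand(64, 64, 103).astype(np.float32)
label_map = np.random.randint(0, 10, size=(64, 64))
X, Y = extract_multilabel_patches(cube, label_map)
print(X.shape, Y.shape)   # e.g. (49, 9, 9, 103) (49, 10)
```

The three training schemes can likewise be sketched for a generic two-component network, here assumed to be an autoencoder-style feature extractor plus a multi-label classifier head; the architecture, loss weighting, and optimiser setup are assumptions for illustration, not the paper's exact configuration.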
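```python
# Hedged sketch of the Joint, Iterative, and Cascade training schemes for a
# two-component network (feature extractor + multi-label classifier head).
import torch
import torch.nn as nn

bands, n_classes, patch = 103, 10, 9
encoder = nn.Sequential(nn.Flatten(), nn.Linear(patch * patch * bands, 256), nn.ReLU())
decoder = nn.Linear(256, patch * patch * bands)
classifier = nn.Linear(256, n_classes)            # logits for multi-hot targets

rec_loss = nn.MSELoss()
cls_loss = nn.BCEWithLogitsLoss()                 # standard choice for multi-label outputs

def joint_step(x, y, opt, alpha=0.5):
    """Joint scheme: one weighted sum of both losses; opt covers all components.
    The weight alpha is typically found by an expensive grid search."""
    z = encoder(x)
    loss = (alpha * rec_loss(decoder(z), x.flatten(1))
            + (1 - alpha) * cls_loss(classifier(z), y))
    opt.zero_grad(); loss.backward(); opt.step()

def iterative_step(x, y, opt_rec, opt_cls):
    """Iterative scheme: alternate updates so both parts share encoder features
    from the early stages of training."""
    l_rec = rec_loss(decoder(encoder(x)), x.flatten(1))
    opt_rec.zero_grad(); l_rec.backward(); opt_rec.step()
    l_cls = cls_loss(classifier(encoder(x)), y)
    opt_cls.zero_grad(); l_cls.backward(); opt_cls.step()

def cascade_train(loader, opt_rec, opt_cls, epochs=5):
    """Cascade scheme: train the reconstruction part first, freeze the encoder,
    then train only the classifier on the frozen features."""
    for _ in range(epochs):
        for x, _ in loader:
            l = rec_loss(decoder(encoder(x)), x.flatten(1))
            opt_rec.zero_grad(); l.backward(); opt_rec.step()
    for p in encoder.parameters():
        p.requires_grad_(False)
    for _ in range(epochs):
        for x, y in loader:
            l = cls_loss(classifier(encoder(x)), y)
            opt_cls.zero_grad(); l.backward(); opt_cls.step()

# Example optimisers: the reconstruction optimiser covers encoder + decoder,
# the classification optimiser covers encoder + classifier.
opt_rec = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_cls = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
```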
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - DiverseNet: Decision Diversified Semi-supervised Semantic Segmentation Networks for Remote Sensing Imagery [17.690698736544626]
We propose DiverseNet which explores multi-head and multi-model semi-supervised learning algorithms by simultaneously enhancing precision and diversity during training.
The two proposed methods in the DiverseNet family, DiverseHead and DiverseModel, both achieve better semantic segmentation performance on four widely used remote sensing imagery datasets.
arXiv Detail & Related papers (2023-11-22T22:20:10Z) - Scene Change Detection Using Multiscale Cascade Residual Convolutional
Neural Networks [0.0]
Scene change detection is an image processing problem related to partitioning pixels of a digital image into foreground and background regions.
In this work, we propose a Convolutional Neural Network that integrates a novel Multiscale Residual Processing Module.
Experiments conducted on two different datasets support the overall effectiveness of the proposed approach, achieving an average overall effectiveness of $\boldsymbol{0.9622}$ and $\boldsymbol{0.9664}$ on the Change Detection 2014 and PetrobrasROUTES datasets, respectively.
arXiv Detail & Related papers (2022-12-20T16:48:51Z) - Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z) - Towards Interpretable Deep Metric Learning with Structural Matching [86.16700459215383]
We present a deep interpretable metric learning (DIML) method for more transparent embedding learning.
Our method is model-agnostic, which can be applied to off-the-shelf backbone networks and metric learning methods.
We evaluate our method on three major benchmarks of deep metric learning including CUB200-2011, Cars196, and Stanford Online Products.
arXiv Detail & Related papers (2021-08-12T17:59:09Z) - Unifying Remote Sensing Image Retrieval and Classification with Robust
Fine-tuning [3.6526118822907594]
We aim to unify remote sensing image retrieval and classification with a new large-scale training and testing dataset, SF300.
We show that our framework systematically achieves a boost of retrieval and classification performance on nine different datasets compared to an ImageNet pretrained baseline.
arXiv Detail & Related papers (2021-02-26T11:01:30Z) - Spectral Analysis Network for Deep Representation Learning and Image
Clustering [53.415803942270685]
This paper proposes a new network structure for unsupervised deep representation learning based on spectral analysis.
It can identify local similarities among images at the patch level and is thus more robust against occlusion.
It can learn more clustering-friendly representations and reveal the deep correlations among data samples.
arXiv Detail & Related papers (2020-09-11T05:07:15Z) - Sparse Coding Driven Deep Decision Tree Ensembles for Nuclear
Segmentation in Digital Pathology Images [15.236873250912062]
We propose an easily trained yet powerful representation learning approach with performance highly competitive to deep neural networks in a digital pathology image segmentation task.
The method, called sparse coding driven deep decision tree ensembles that we abbreviate as ScD2TE, provides a new perspective on representation learning.
arXiv Detail & Related papers (2020-08-13T02:59:31Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - Contextual Encoder-Decoder Network for Visual Saliency Prediction [42.047816176307066]
We propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task.
We combine the resulting representations with global scene information for accurately predicting visual saliency.
Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone.
arXiv Detail & Related papers (2019-02-18T16:15:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.