On the Importance of 3D Surface Information for Remote Sensing
Classification Tasks
- URL: http://arxiv.org/abs/2104.13969v1
- Date: Mon, 26 Apr 2021 19:55:51 GMT
- Title: On the Importance of 3D Surface Information for Remote Sensing
Classification Tasks
- Authors: Jan Petrich, Ryan Sander, Eliza Bradley, Adam Dawood, Shawn Hough
- Abstract summary: Adding 3D surface information to RGB imagery can provide crucial geometric information for semantic classes such as buildings.
We assess classification performance using multispectral imagery from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D Semantic Labeling contest and the United States Special Operations Command (USSOCOM) Urban 3D Challenge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been a surge in remote sensing machine learning applications that
operate on data from active or passive sensors as well as multi-sensor
combinations (Ma et al. (2019)). Despite this surge, however, there has been
relatively little study on the comparative value of 3D surface information for
machine learning classification tasks. Adding 3D surface information to RGB
imagery can provide crucial geometric information for semantic classes such as
buildings, and can thus improve out-of-sample predictive performance. In this
paper, we examine in-sample and out-of-sample classification performance of
Fully Convolutional Neural Networks (FCNNs) and Support Vector Machines (SVMs)
trained with and without 3D normalized digital surface model (nDSM)
information. We assess classification performance using multispectral imagery
from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D
Semantic Labeling contest and the United States Special Operations Command
(USSOCOM) Urban 3D Challenge. We find that providing RGB classifiers with
additional 3D nDSM information results in little increase in in-sample
classification performance, suggesting that spectral information alone may be
sufficient for the given classification tasks. However, we observe that
providing these RGB classifiers with additional nDSM information leads to
significant gains in out-of-sample predictive performance. Specifically, we
observe an average improvement in out-of-sample all-class accuracy of 14.4% on
the ISPRS dataset and an average improvement in out-of-sample F1 score of 8.6%
on the USSOCOM dataset. In addition, the experiments establish that nDSM
information is critical in machine learning and classification settings that
face training sample scarcity.
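For readers unfamiliar with how nDSM information enters such a pipeline, the sketch below shows one common way to do it: derive the nDSM as the difference between a digital surface model (DSM) and a bare-earth terrain model (DTM), rescale it, and append it to the RGB bands as a fourth input channel of a fully convolutional network. This is a minimal illustration only; the array names, tile size, tiny network, and six-class output are assumptions for the example, not the paper's actual architecture or data layout.

```python
import numpy as np
import torch
import torch.nn as nn

# --- Illustrative inputs (random stand-ins for real tiles) ---
# rgb:  H x W x 3 RGB tile, values scaled to [0, 1]
# dsm:  H x W digital surface model (surface elevation, meters)
# dtm:  H x W digital terrain model (bare-earth elevation, meters)
H, W = 256, 256
rgb = np.random.rand(H, W, 3).astype(np.float32)
dsm = np.random.rand(H, W).astype(np.float32) * 50.0
dtm = np.random.rand(H, W).astype(np.float32) * 5.0

# Normalized DSM: height above ground, i.e. the "3D surface information"
# channel referred to in the abstract.
ndsm = np.clip(dsm - dtm, 0.0, None)
ndsm = ndsm / (ndsm.max() + 1e-6)  # rescale roughly to [0, 1] like the RGB bands

# Stack RGB and RGB+nDSM into (N, C, H, W) tensors.
x_rgb = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)          # 3-channel baseline
x_rgbd = torch.cat([x_rgb, torch.from_numpy(ndsm)[None, None]], 1)   # 4-channel RGB + nDSM

# A deliberately tiny fully convolutional classifier head; the paper's FCNN is
# more elaborate, this only shows how the input width changes with nDSM.
def tiny_fcnn(in_channels: int, num_classes: int = 6) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, num_classes, 1),  # per-pixel class scores
    )

logits_rgb = tiny_fcnn(3)(x_rgb)      # RGB-only classifier
logits_rgbd = tiny_fcnn(4)(x_rgbd)    # RGB + nDSM classifier
print(logits_rgb.shape, logits_rgbd.shape)  # (1, 6, 256, 256) each
```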
Related papers
- ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z) - Generative Adversarial Networks for Imputing Sparse Learning Performance [3.0350058108125646]
This paper proposes using the Generative Adversarial Imputation Networks (GAIN) framework to impute sparse learning performance data.
Our customized GAIN-based method imputes sparse data in a 3D tensor space.
This finding enhances comprehensive learning data modeling and analytics in AI-based education.
arXiv Detail & Related papers (2024-07-26T17:09:48Z) - Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with
Pre-trained Vision-Language Models [62.663113296987085]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.
We introduce two novel components: the Redundant Feature Eliminator (RFE) and the Spatial Noise Compensator (SNC).
Considering the imbalance in existing 3D datasets, we also propose new evaluation metrics that offer a more nuanced assessment of a 3D FSCIL model.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - Assessing Neural Network Representations During Training Using
Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - A New Benchmark: On the Utility of Synthetic Data with Blender for Bare
Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success with the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent them, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z) - Supervised classification methods applied to airborne hyperspectral
images: Comparative study using mutual information [0.0]
This paper investigates the performance of four supervised learning algorithms, namely Support Vector Machines (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Linear Discriminant Analysis (LDA).
The experiments have been performed on three real hyperspectral datasets acquired by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Reflective Optics System Imaging Spectrometer (ROSIS) sensors.
arXiv Detail & Related papers (2022-10-27T13:39:08Z) - Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer
Learning [67.40866334083941]
We propose an end-to-end 3-D lightweight convolutional neural network (CNN) for limited samples-based HSI classification.
Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and a lower computational cost.
Our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
arXiv Detail & Related papers (2020-12-07T03:44:35Z) - Improving Point Cloud Semantic Segmentation by Learning 3D Object
Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
arXiv Detail & Related papers (2020-09-22T14:17:40Z) - Hyperspectral Classification Based on 3D Asymmetric Inception Network
with Data Fusion Transfer Learning [36.05574127972413]
We first propose a 3D asymmetric inception network, AINet, to overcome the overfitting problem.
By emphasizing spectral signatures over the spatial contexts of HSI data, AINet can convey and classify the features effectively.
arXiv Detail & Related papers (2020-02-11T06:37:34Z)