A Nearest Neighbor Network to Extract Digital Terrain Models from 3D
Point Clouds
- URL: http://arxiv.org/abs/2005.10745v2
- Date: Sat, 20 Jun 2020 19:51:13 GMT
- Authors: Mohammed Yousefhussien, David J. Kelbe, and Carl Salvaggio
- Abstract summary: We present an algorithm that operates on 3D-point clouds and estimates the underlying DTM for the scene using an end-to-end approach.
Our model learns neighborhood information and seamlessly integrates this with point-wise and block-wise global features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When 3D-point clouds from overhead sensors are used as input to remote
sensing data exploitation pipelines, a large amount of effort is devoted to
data preparation. Among the multiple stages of the preprocessing chain,
estimating the Digital Terrain Model (DTM) is considered to be of high
importance; however, this remains a challenge, especially for raw point clouds
derived from optical imagery. Current algorithms estimate the ground points
using either a set of geometrical rules that require tuning multiple parameters
and human interaction, or cast the problem as a binary classification machine
learning task where ground and non-ground classes are found. In contrast, here
we present an algorithm that directly operates on 3D-point clouds and estimates
the underlying DTM for the scene using an end-to-end approach without the need
to classify points into ground and non-ground cover types. Our model learns
neighborhood information and seamlessly integrates this with point-wise and
block-wise global features. We validate our model using the ISPRS 3D Semantic
Labeling Contest LiDAR data, as well as three scenes generated using dense
stereo matching, representative of high-rise buildings, lower urban structures,
and a dense old-city residential area. We compare our findings with two widely
used software packages for DTM extraction, namely ENVI and LAStools. Our
preliminary results show that the proposed method achieves an overall
Mean Absolute Error of 11.5%, compared to 29% and 16% for ENVI and LAStools,
respectively.
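The abstract's core idea, learning neighborhood information and combining it with point-wise and block-wise global features, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy construction, not the authors' network: the brute-force k-nearest-neighbor search, the neighborhood mean, and the block mean stand in for the learned features described in the paper.

```python
import numpy as np

def knn_neighborhood_features(points, k=8):
    """Toy sketch: for each 3D point, gather its k nearest neighbors
    and combine point-wise, neighborhood, and block-wise features.
    Brute-force O(N^2) distances; suitable for small N only."""
    n = points.shape[0]
    # pairwise squared distances between all points
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # indices of the k nearest neighbors (index 0 is the point itself)
    nn_idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    neighbors = points[nn_idx]               # (n, k, 3)
    local_mean = neighbors.mean(axis=1)      # neighborhood feature, (n, 3)
    global_feat = points.mean(axis=0)        # block-wise global feature, (3,)
    # concatenate point-wise, neighborhood, and global features per point
    return np.concatenate(
        [points, local_mean, np.tile(global_feat, (n, 1))], axis=1
    )                                        # (n, 9)

rng = np.random.default_rng(0)
pts = rng.random((100, 3))
feats = knn_neighborhood_features(pts)
print(feats.shape)  # (100, 9)
```

In the actual model these hand-crafted statistics would be replaced by learned per-neighborhood encodings, but the feature-concatenation pattern is the same.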
Related papers
- Leveraging Large-Scale Pretrained Vision Foundation Models for
Label-Efficient 3D Point Cloud Segmentation [67.07112533415116]
We present a novel framework that adapts various foundational models for the 3D point cloud segmentation task.
Our approach involves making initial predictions of 2D semantic masks using different large vision models.
To generate robust 3D semantic pseudo labels, we introduce a semantic label fusion strategy that effectively combines all the results via voting.
arXiv Detail & Related papers (2023-11-03T15:41:15Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on widely used point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Tinto: Multisensor Benchmark for 3D Hyperspectral Point Cloud Segmentation in the Geosciences [9.899276249773425]
We present Tinto, a benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping.
Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent features in the original datasets to reconstruct realistic spectral data from the ground-truth.
We used these datasets to explore the abilities of different deep learning approaches for automated geological mapping.
arXiv Detail & Related papers (2023-05-17T03:24:08Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
- H3D: Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and textured Meshes from UAV LiDAR and Multi-View-Stereo [4.263987603222371]
This paper introduces a 3D dataset which is unique in three ways.
It depicts the village of Hessigheim (Germany), henceforth referred to as H3D.
It is designed both to promote research in the field of 3D data analysis and to evaluate and rank emerging approaches.
arXiv Detail & Related papers (2021-02-10T09:33:48Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 $km^2$, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative approach to overcome the limitations of CNN based approaches by encoding the spatial features of raw 3D point clouds into undirected graph models.
The proposed method achieves on par state-of-the-art accuracy with improved training time and model stability thus indicating strong potential for further research.
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
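The graph-encoding idea in the last entry, representing the spatial structure of a raw point cloud as an undirected graph, can be sketched minimally. This is a hypothetical NumPy illustration, not the cited paper's implementation: it builds an undirected k-nearest-neighbor graph by brute force, which is only practical for small clouds.

```python
import numpy as np

def knn_graph(points, k=4):
    """Build an undirected k-nearest-neighbor graph over a 3D point cloud.
    Returns a set of edges (i, j) with i < j. Brute force; small N only."""
    # pairwise squared distances; index 0 of each argsort row is the point itself
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]
    edges = set()
    for i, neigh in enumerate(nn):
        for j in neigh:
            # store each edge once, smallest index first, to keep it undirected
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

rng = np.random.default_rng(1)
pts = rng.random((50, 3))
g = knn_graph(pts)
```

Each node contributes k edges, and mutual neighbor pairs are deduplicated, so the graph holds between kN/2 and kN unique undirected edges; a graph neural network would then propagate point features along these edges.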
This list is automatically generated from the titles and abstracts of the papers in this site.