An unsupervised, open-source workflow for 2D and 3D building mapping
from airborne LiDAR data
- URL: http://arxiv.org/abs/2205.14585v3
- Date: Wed, 16 Aug 2023 03:40:41 GMT
- Title: An unsupervised, open-source workflow for 2D and 3D building mapping
from airborne LiDAR data
- Authors: Hunsoo Song, Jinha Jung
- Abstract summary: This study introduces an automated, open-source workflow for large-scale 2D and 3D building mapping utilizing airborne LiDAR data.
Our workflow operates entirely unsupervised, eliminating the need for any training procedures.
Our method's robustness has been rigorously optimized and tested using an extensive dataset (> 550 km$^2$), and further validated through comparison with deep learning-based and hand-digitized products.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the substantial demand for high-quality, large-area building maps, no
established open-source workflow for generating 2D and 3D maps currently
exists. This study introduces an automated, open-source workflow for
large-scale 2D and 3D building mapping utilizing airborne LiDAR data. Uniquely,
our workflow operates entirely unsupervised, eliminating the need for any
training procedures. We have integrated a specifically tailored DTM generation
algorithm into our workflow to prevent errors in complex urban landscapes,
especially around highways and overpasses. Through fine rasterization of LiDAR
point clouds, we've enhanced building-tree differentiation, reduced errors near
water bodies, and augmented computational efficiency by introducing a new
planarity calculation. Our workflow offers a practical and scalable solution
for the mass production of rasterized 2D and 3D building maps from raw airborne
LiDAR data. Also, we elaborate on the influence of parameters and potential
error sources to provide users with practical guidance. Our method's robustness
has been rigorously optimized and tested using an extensive dataset (> 550
km$^2$), and further validated through comparison with deep learning-based and
hand-digitized products. Notably, through these unparalleled, large-scale
comparisons, we offer a valuable analysis of large-scale building maps
generated via different methodologies, providing insightful evaluations of the
effectiveness of each approach. We anticipate that our highly scalable building
mapping workflow will facilitate the production of reliable 2D and 3D building
maps, fostering advances in large-scale urban analysis. The code will be
released upon publication.
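As a rough illustration of the two steps the abstract highlights, the following is a minimal sketch (not the authors' released code) of rasterizing a LiDAR point cloud onto a fine grid and scoring per-cell planarity, which is what lets planar building roofs be separated from irregular tree canopies. The 0.5 m cell size, the 3x3 fitting window, and the function names are illustrative assumptions, not parameters taken from the paper.

```python
# Hypothetical sketch of fine rasterization + planarity scoring for building-tree
# differentiation. Input: points as an (N, 3) array of x, y, z coordinates in metres.
import numpy as np

def rasterize_max(points, cell_size=0.5):
    """Build a coarse digital surface model by keeping the highest return per grid cell."""
    xy_min = points[:, :2].min(axis=0)
    cols, rows = (np.ceil((points[:, :2].max(axis=0) - xy_min) / cell_size).astype(int) + 1)
    dsm = np.full((rows, cols), np.nan)
    ij = ((points[:, :2] - xy_min) / cell_size).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(dsm[j, i]) or z > dsm[j, i]:
            dsm[j, i] = z
    return dsm

def planarity(dsm, window=3):
    """Score each cell by the RMS residual of a local plane fit (low residual = planar roof)."""
    half = window // 2
    score = np.full_like(dsm, np.nan)
    yy, xx = np.mgrid[0:window, 0:window]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(window * window)])
    rows, cols = dsm.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            patch = dsm[r - half:r + half + 1, c - half:c + half + 1]
            if np.isnan(patch).any():
                continue
            coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
            score[r, c] = np.sqrt(np.mean((A @ coeffs - patch.ravel()) ** 2))
    return score
```

In a workflow of this kind, cells with low planar-fit residuals and heights well above the DTM would be candidate building pixels, while high-residual elevated cells would be treated as vegetation; the actual thresholds and the tailored DTM generation described in the paper are not reproduced here.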
Related papers
- Semi-supervised 3D Semantic Scene Completion with 2D Vision Foundation Model Guidance [11.090775523892074]
We introduce a novel semi-supervised framework to alleviate the dependency on densely annotated data.
Our approach leverages 2D foundation models to generate essential 3D scene geometric and semantic cues.
Our method achieves up to 85% of the fully-supervised performance using only 10% labeled data.
arXiv Detail & Related papers (2024-08-21T12:13:18Z) - Multi-Unit Floor Plan Recognition and Reconstruction Using Improved Semantic Segmentation of Raster-Wise Floor Plans [1.0436971860292366]
We propose two novel pixel-wise segmentation methods based on the MDA-Unet and MACU-Net architectures.
The proposed methods are compared with two other state-of-the-art techniques and several benchmark datasets.
On the commonly used CubiCasa benchmark dataset, our methods have achieved the mean F1 score of 0.86 over five examined classes.
arXiv Detail & Related papers (2024-08-02T18:36:45Z) - ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z) - OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z) - Semi-supervised Learning from Street-View Images and OpenStreetMap for
Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method leads to a clear performance boosting in estimating building heights with a Mean Absolute Error (MAE) around 2.1 meters.
The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z) - Automatic Reconstruction of Semantic 3D Models from 2D Floor Plans [1.8581514902689347]
We present a pipeline for reconstruction of vectorized 3D models from scanned 2D plans.
The method presented state-of-the-art results in the public dataset CubiCasa5k.
arXiv Detail & Related papers (2023-06-02T16:06:42Z) - Efficient Quality Diversity Optimization of 3D Buildings through 2D
Pre-optimization [101.18253437732933]
Quality diversity algorithms can be used to create a diverse set of solutions to inform engineers' intuition.
But quality diversity is not efficient in very expensive problems, needing 100,000s of evaluations.
We show that we can produce better machine learning models by producing training data with quality diversity.
arXiv Detail & Related papers (2023-03-28T11:20:59Z) - sat2pc: Estimating Point Cloud of Building Roofs from 2D Satellite
Images [1.8884278918443564]
We propose sat2pc, a deep learning architecture that predicts the point cloud of a building roof from a single 2D satellite image.
Our results show that sat2pc was able to outperform existing baselines by at least 18.6%.
arXiv Detail & Related papers (2022-05-25T03:24:40Z) - Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories [23.314557741879664]
We present Walk2Map, a data-driven approach to generate floor plans from trajectories of a person walking inside the rooms.
Thanks to advances in data-driven inertial odometry, such minimalistic input data can be acquired from the IMU readings of consumer-level smartphones.
We train our networks using scanned 3D indoor models and apply them in a cascaded fashion on an indoor walk trajectory.
arXiv Detail & Related papers (2021-02-27T16:29:09Z) - Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via the progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z) - SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural
Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)