Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building Model Reconstruction
- URL: http://arxiv.org/abs/2310.01067v1
- Date: Mon, 2 Oct 2023 10:23:08 GMT
- Title: Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building Model Reconstruction
- Authors: Weixiao Gao, Ravi Peters, Jantien Stoter
- Abstract summary: This paper presents a method for extracting rooflines from true orthophotos using line detection for the reconstruction of building models at the LoD2 level.
The method is superior to existing plane detection-based methods and state-of-the-art deep learning methods in terms of the accuracy and completeness of the reconstructed buildings.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper discusses the reconstruction of LoD2 building models from 2D and
3D data for large-scale urban environments. Traditional methods involve the use
of LiDAR point clouds, but due to high costs and long intervals associated with
acquiring such data for rapidly developing areas, researchers have started
exploring the use of point clouds generated from (oblique) aerial images.
However, using such point clouds for traditional plane detection-based methods
can result in significant errors and introduce noise into the reconstructed
building models. To address this, this paper presents a method for extracting
rooflines from true orthophotos using line detection for the reconstruction of
building models at the LoD2 level. The approach is able to extract relatively
complete rooflines without the need for pre-labeled training data or
pre-trained models. These lines can directly be used in the LoD2 building model
reconstruction process. The method is superior to existing plane
detection-based methods and state-of-the-art deep learning methods in terms of
the accuracy and completeness of the reconstructed buildings. Our source code is
available at https://github.com/tudelft3d/Roofline-extraction-from-orthophotos.
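As a rough illustration of the line-detection step described in the abstract, the minimal Python sketch below extracts candidate roofline segments from an orthophoto using OpenCV's Canny edge detector and probabilistic Hough transform. This is not the authors' implementation (see the linked repository for that); the input file name and all thresholds here are illustrative assumptions that would need tuning per dataset.

```python
# Minimal sketch of line-based roofline extraction from a true orthophoto.
# NOT the paper's method: a generic Canny + probabilistic Hough pipeline
# standing in for its line detection step. File name and thresholds are
# illustrative assumptions.
import cv2
import numpy as np

def extract_rooflines(image_path, canny_low=50, canny_high=150,
                      hough_threshold=50, min_line_length=30, max_line_gap=5):
    """Return an (N, 4) array of detected segments (x1, y1, x2, y2) in pixels."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Roof ridges, eaves, and step edges tend to produce strong gradients.
    edges = cv2.Canny(gray, canny_low, canny_high)
    # Group edge pixels into straight segments with the probabilistic Hough transform.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=hough_threshold,
                            minLineLength=min_line_length,
                            maxLineGap=max_line_gap)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4), int)

if __name__ == "__main__":
    segments = extract_rooflines("orthophoto.tif")  # hypothetical input file
    print(f"detected {len(segments)} candidate roofline segments")
```

In the paper's pipeline, such 2D segments are then used directly in the LoD2 building model reconstruction; that step is omitted from this sketch.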
Related papers
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- Few-shot point cloud reconstruction and denoising via learned Gaussian splats renderings and fine-tuned diffusion features [52.62053703535824]
We propose a method to reconstruct point clouds from few images and to denoise point clouds from their rendering.
To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with a hybrid surface and appearance representation.
We demonstrate how these learned filters can be used to remove point cloud noise without 3D supervision.
arXiv Detail & Related papers (2024-04-01T13:38:16Z)
- Point2Building: Reconstructing Buildings from Airborne LiDAR Point Clouds [23.897507889025817]
We present a learning-based approach to reconstruct buildings as 3D polygonal meshes from airborne LiDAR point clouds.
Our model learns directly from the point cloud data, thereby reducing error propagation and increasing the fidelity of the reconstruction.
We experimentally validate our method on a collection of airborne LiDAR data of Zurich, Berlin and Tallinn.
arXiv Detail & Related papers (2024-03-04T15:46:50Z)
- LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models [1.1965844936801797]
Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots.
We present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds.
Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks.
arXiv Detail & Related papers (2023-09-17T12:26:57Z)
- Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models [97.58685709663287]
Generative pre-training can boost the performance of fundamental models in 2D vision.
In 3D vision, the over-reliance on Transformer-based backbones and the unordered nature of point clouds have restricted the further development of generative pre-training.
We propose a novel 3D-to-2D generative pre-training method that is adaptable to any point cloud model.
arXiv Detail & Related papers (2023-07-27T16:07:03Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even style-aware 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification [3.2662392450935416]
We propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features.
In this method, occupancy voxels are fused with classified point clouds, which assigns semantics to the voxels.
The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate facade openings, which are reconstructed using a 3D model library.
arXiv Detail & Related papers (2023-03-10T16:01:30Z)
- 3D Point Cloud Pre-training with Knowledge Distillation from 2D Images [128.40422211090078]
We propose a knowledge distillation method for 3D point cloud pre-trained models to acquire knowledge directly from the 2D representation learning model.
Specifically, we introduce a cross-attention mechanism to extract concept features from the 3D point cloud and compare them with the semantic information from 2D images.
In this scheme, the point cloud pre-trained models learn directly from rich information contained in 2D teacher models.
arXiv Detail & Related papers (2022-12-17T23:21:04Z)
- Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks.
We propose a novel scheme to fuse anchor-free detection into a single-stage network, which enables efficient training and better convergence.
We employ an off-the-shelf rendering engine such as Blender to reconstruct realistic, high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z)
- Curved Buildings Reconstruction from Airborne LiDAR Data by Matching and Deforming Geometric Primitives [13.777047260469677]
We propose a new framework for curved building reconstruction via assembling and deforming geometric primitives.
The presented framework is validated on several highly curved buildings captured by various LiDAR sensors in different cities.
arXiv Detail & Related papers (2020-03-22T16:05:10Z)