Combining visibility analysis and deep learning for refinement of
semantic 3D building models by conflict classification
- URL: http://arxiv.org/abs/2303.05998v1
- Date: Fri, 10 Mar 2023 16:01:30 GMT
- Title: Combining visibility analysis and deep learning for refinement of
semantic 3D building models by conflict classification
- Authors: Olaf Wysocki, Eleonora Grilli, Ludwig Hoegner, Uwe Stilla
- Abstract summary: We propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features.
In the method, occupancy voxels are fused with classified point clouds, which provides semantics to voxels.
The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate façade openings, which are reconstructed using a 3D model library.
- Score: 3.2662392450935416
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic 3D building models are widely available and used in numerous
applications. Such 3D building models display rich semantics but no façade
openings, chiefly owing to their aerial acquisition techniques. Hence, refining
models' façades using dense, street-level, terrestrial point clouds seems a
promising strategy. In this paper, we propose a method of combining visibility
analysis and neural networks for enriching 3D models with window and door
features. In the method, occupancy voxels are fused with classified point
clouds, which provides semantics to voxels. Voxels are also used to identify
conflicts between laser observations and 3D models. The semantic voxels and
conflicts are combined in a Bayesian network to classify and delineate
façade openings, which are reconstructed using a 3D model library.
Unaffected building semantics are preserved while updated semantics are added,
thereby upgrading the building model to LoD3. Moreover, Bayesian network
results are back-projected onto point clouds to improve points' classification
accuracy. We tested our method on a municipal CityGML LoD2 repository and the
open point cloud datasets: TUM-MLS-2016 and TUM-FAÇADE. Validation results
revealed that the method improves the accuracy of point cloud semantic
segmentation and upgrades buildings with façade elements. The method can be
applied to enhance the accuracy of urban simulations and facilitate the
development of semantic segmentation algorithms.
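The core fusion step of the abstract can be illustrated with a minimal, hypothetical sketch: for a single voxel, semantic class priors from the classified point cloud are combined with a binary conflict observation (a laser ray penetrating the LoD2 model surface) via Bayes' rule. The class names, priors, and likelihoods below are illustrative assumptions, not values from the paper, whose actual Bayesian network is more involved.

```python
# Illustrative Bayes update for one voxel: combine point-cloud semantic
# priors with a binary "conflict" observation (ray passes through the
# modelled surface). All numbers are hypothetical.

PRIOR = {"wall": 0.7, "window": 0.2, "door": 0.1}

# P(conflict observed | class): openings let rays pass, walls do not.
P_CONFLICT = {"wall": 0.05, "window": 0.9, "door": 0.85}

def posterior(conflict: bool) -> dict:
    """Return P(class | conflict observation) for one voxel."""
    likelihood = {
        c: (P_CONFLICT[c] if conflict else 1.0 - P_CONFLICT[c])
        for c in PRIOR
    }
    unnorm = {c: PRIOR[c] * likelihood[c] for c in PRIOR}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

if __name__ == "__main__":
    post = posterior(conflict=True)
    # A conflicting voxel shifts probability mass from "wall"
    # to the opening classes.
    print(max(post, key=post.get))  # → window
```

With these toy numbers, observing a conflict raises the window posterior from 0.2 to 0.6, which is the intuition behind using model-versus-observation conflicts as evidence for openings.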
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract priors from well-trained transformers on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- Leveraging Large-Scale Pretrained Vision Foundation Models for Label-Efficient 3D Point Cloud Segmentation [67.07112533415116]
We present a novel framework that adapts various foundational models for the 3D point cloud segmentation task.
Our approach involves making initial predictions of 2D semantic masks using different large vision models.
To generate robust 3D semantic pseudo labels, we introduce a semantic label fusion strategy that effectively combines all the results via voting.
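The voting-based label fusion described above can be sketched in a few lines (hypothetical label lists; the actual framework first projects 2D masks into 3D before voting):

```python
from collections import Counter

def fuse_labels(predictions: list[list[str]]) -> list[str]:
    """Majority vote across several models' per-point label lists.

    predictions[m][i] is model m's label for point i; ties resolve to
    the label encountered first (Counter.most_common ordering).
    """
    n_points = len(predictions[0])
    fused = []
    for i in range(n_points):
        votes = Counter(model[i] for model in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three hypothetical models labelling four points:
models = [
    ["wall", "window", "wall", "door"],
    ["wall", "window", "window", "door"],
    ["roof", "window", "wall", "wall"],
]
print(fuse_labels(models))  # → ['wall', 'window', 'wall', 'door']
```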
arXiv Detail & Related papers (2023-11-03T15:41:15Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray casting and Bayesian networks [40.7734793392562]
Reconstructing semantic 3D building models at the level of detail (LoD) 3 is a long-standing challenge.
We present a novel method, called Scan2LoD3, that accurately reconstructs semantic LoD3 building models.
We believe our method can foster the development of probability-driven semantic 3D reconstruction at LoD3.
arXiv Detail & Related papers (2023-05-10T17:01:18Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity, evenly distributed 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Elevation Estimation-Driven Building 3D Reconstruction from Single-View Remote Sensing Imagery [20.001807614214922]
Building 3D reconstruction from remote sensing images has a wide range of applications in smart cities, photogrammetry and other fields.
We propose an efficient DSM estimation-driven reconstruction framework (Building3D) to reconstruct 3D building models from the input single-view remote sensing image.
Our Building3D is rooted in the SFFDE network for building elevation prediction, synchronized with a building extraction network for building masks, and then sequentially performs point cloud reconstruction and surface reconstruction (or CityGML model reconstruction).
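The step from an estimated elevation map (DSM) and a building mask to a point cloud can be sketched as follows. The arrays and the ground-sampling-distance parameter are toy assumptions; the SFFDE elevation network itself is a learned predictor and is not reproduced here.

```python
import numpy as np

def dsm_to_points(dsm: np.ndarray, mask: np.ndarray, gsd: float = 1.0):
    """Lift masked DSM pixels to 3D points.

    dsm  : (H, W) per-pixel elevation estimate (metres)
    mask : (H, W) boolean building mask
    gsd  : ground sampling distance (metres per pixel)
    """
    rows, cols = np.nonzero(mask)
    # x from column index, y from row index, z from the elevation map.
    return np.stack([cols * gsd, rows * gsd, dsm[rows, cols]], axis=1)

dsm = np.array([[0.0, 12.5], [11.0, 0.0]])
mask = np.array([[False, True], [True, False]])
pts = dsm_to_points(dsm, mask, gsd=0.5)
print(pts.shape)  # → (2, 3)
```

Surface reconstruction (e.g. meshing or CityGML fitting) would then operate on such a point set.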
arXiv Detail & Related papers (2023-01-11T17:20:30Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which inherits the flow-based explicit generative models for sampling point clouds with arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet, with the experimental results demonstrating the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z)
- 3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models [17.487852393066458]
Existing verification methods for point cloud models are time-consuming and computationally unattainable on large networks.
We propose 3DVerifier to tackle both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation.
Our approach achieves orders-of-magnitude improvements in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers.
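Bounding a multiplication with linear relaxations can be illustrated with the standard McCormick envelope, a generic construction for bilinear terms (shown here as background; 3DVerifier's exact relaxation may differ). For x in [xl, xu] and y in [yl, yu], four linear inequalities sandwich the product x*y:

```python
import random

def mccormick_holds(x, y, xl, xu, yl, yu, eps=1e-9):
    """Check the four McCormick inequalities bounding w = x*y
    over the box [xl, xu] x [yl, yu]."""
    w = x * y
    # Lower planes: from (x-xl)(y-yl) >= 0 and (x-xu)(y-yu) >= 0.
    lower_ok = (w >= yl * x + xl * y - xl * yl - eps and
                w >= yu * x + xu * y - xu * yu - eps)
    # Upper planes: from (x-xl)(y-yu) <= 0 and (x-xu)(y-yl) <= 0.
    upper_ok = (w <= yu * x + xl * y - xl * yu + eps and
                w <= yl * x + xu * y - xu * yl + eps)
    return lower_ok and upper_ok

# Sample the box and confirm the relaxation is sound:
random.seed(0)
xl, xu, yl, yu = -1.0, 2.0, 0.5, 3.0
assert all(
    mccormick_holds(random.uniform(xl, xu), random.uniform(yl, yu),
                    xl, xu, yl, yu)
    for _ in range(1000)
)
print("McCormick envelope holds on sampled points")
```

Because each plane is linear in x and y, such bounds compose with standard forward/backward bound propagation through the rest of the network.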
arXiv Detail & Related papers (2022-07-15T15:31:16Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
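The cylindrical partition idea can be sketched as assigning each LiDAR point to a bin in (radius, azimuth, height) rather than in a Cartesian grid. The bin counts and ranges below are arbitrary assumptions for illustration:

```python
import numpy as np

def cylindrical_voxel_ids(points, n_rho=16, n_theta=32, n_z=8,
                          rho_max=50.0, z_min=-3.0, z_max=5.0):
    """Assign each point an index in a cylindrical voxel grid.

    points: (N, 3) array of x, y, z coordinates (sensor at origin).
    Returns (N, 3) integer bin indices (rho, theta, z).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)     # radial distance from sensor
    theta = np.arctan2(y, x)           # azimuth in [-pi, pi]
    i_rho = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
    i_theta = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    i_z = np.clip(((z - z_min) / (z_max - z_min) * n_z).astype(int),
                  0, n_z - 1)
    return np.stack([i_rho, i_theta, i_z], axis=1)
```

Because LiDAR point density falls off with range, equal-angle cylindrical bins keep per-voxel point counts more balanced than a uniform Cartesian grid, which is the motivation the summary describes.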
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks for facade parsing.
We propose a novel scheme to fuse anchor-free detection in a single-stage network, which enables efficient training and better convergence.
We employ an off-the-shelf rendering engine such as Blender to reconstruct realistic, high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.