MLS2LoD3: Refining low LoDs building models with MLS point clouds to
reconstruct semantic LoD3 building models
- URL: http://arxiv.org/abs/2402.06288v1
- Date: Fri, 9 Feb 2024 09:56:23 GMT
- Title: MLS2LoD3: Refining low LoDs building models with MLS point clouds to
reconstruct semantic LoD3 building models
- Authors: Olaf Wysocki, Ludwig Hoegner, Uwe Stilla
- Abstract summary: We introduce a novel refinement strategy enabling LoD3 reconstruction by leveraging the ubiquity of lower LoD building models and the accuracy of MLS point clouds.
We present guidelines for reconstructing LoD3 facade elements and their embedding into the CityGML standard model.
- Score: 3.2732273647357446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although highly-detailed LoD3 building models hold great potential in
various applications, they have yet to become widely available. The primary challenges in
creating such models concern not only automatic detection and reconstruction
but also standard-consistent modeling. In this paper, we introduce a novel
refinement strategy enabling LoD3 reconstruction by leveraging the ubiquity of
lower LoD building models and the accuracy of MLS point clouds. Such a strategy
promises at-scale LoD3 reconstruction and unlocks LoD3 applications, which we
also describe and illustrate in this paper. Additionally, we present guidelines
for reconstructing LoD3 facade elements and their embedding into the CityGML
standard model, disseminating gained knowledge to academics and professionals.
We believe that our method can foster development of LoD3 reconstruction
algorithms and subsequently enable their wider adoption.
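The paper's contribution includes guidelines for embedding reconstructed facade elements into CityGML. As a rough illustration of what that standard-consistent modeling involves (a sketch under assumptions, not the authors' tooling), the snippet below uses lxml to attach a window opening with LoD3 geometry to a wall surface, following the CityGML 2.0 building schema; the gml:id and coordinates are invented.

```python
# A minimal sketch (not the paper's tooling): embedding a reconstructed
# facade opening into a CityGML 2.0 building model with lxml.
# Namespace URIs follow the CityGML 2.0 building schema; the gml:id and
# the coordinates below are invented for illustration.
from lxml import etree

NS = {
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}

def q(prefix: str, tag: str) -> str:
    """Qualified tag name, e.g. q('bldg', 'Window') -> '{...}Window'."""
    return f"{{{NS[prefix]}}}{tag}"

# A wall surface from the lower-LoD input model, to be enriched at LoD3.
wall = etree.Element(q("bldg", "WallSurface"), nsmap=NS)

# The refined facade element: a window modeled as an explicit opening.
opening = etree.SubElement(wall, q("bldg", "opening"))
window = etree.SubElement(opening, q("bldg", "Window"))
window.set(q("gml", "id"), "window_001")  # invented id

# LoD3 geometry of the window, e.g. a rectangle fitted to MLS points.
node = etree.SubElement(window, q("bldg", "lod3MultiSurface"))
for tag in [("gml", "MultiSurface"), ("gml", "surfaceMember"),
            ("gml", "Polygon"), ("gml", "exterior"), ("gml", "LinearRing")]:
    node = etree.SubElement(node, q(*tag))
pos = etree.SubElement(node, q("gml", "posList"))
pos.text = "0 0 1  1 0 1  1 0 2  0 0 2  0 0 1"  # closed ring, made-up coords

print(etree.tostring(wall, pretty_print=True).decode())
```

In CityGML, openings such as windows and doors first become explicit at LoD3, which is exactly why the refinement step targets these elements.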
Related papers
- Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic Images [0.0]
In Texture2LoD3, we introduce a novel method leveraging the ubiquity of 3D building model priors and panoramic street-level images.
We experimentally demonstrate that our method leads to improved facade segmentation accuracy by 11%.
We believe that Texture2LoD3 can scale the adoption of LoD3 models, opening applications in estimating building solar potential or enhancing autonomous driving simulations.
arXiv Detail & Related papers (2025-04-07T16:40:16Z) - Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models [65.90387371072413]
We introduce Difix3D+, a novel pipeline designed to enhance 3D reconstruction and novel-view synthesis.
At the core of our approach is Difix, a single-step image diffusion model trained to enhance and remove artifacts in rendered novel views.
arXiv Detail & Related papers (2025-03-03T17:58:33Z) - 3D-MoE: A Mixture-of-Experts Multi-modal LLM for 3D Vision and Pose Diffusion via Rectified Flow [69.94527569577295]
3D vision and spatial reasoning have long been recognized as preferable for accurately perceiving our three-dimensional world.
Due to the difficulties in collecting high-quality 3D data, research in this area has only recently gained momentum.
We propose converting existing densely activated LLMs into mixture-of-experts (MoE) models, which have proven effective for multi-modal data processing.
arXiv Detail & Related papers (2025-01-28T04:31:19Z) - Taming Feed-forward Reconstruction Models as Latent Encoders for 3D Generative Models [7.485139478358133]
Recent AI-based 3D content creation has largely evolved along two paths: feed-forward image-to-3D reconstruction approaches and 3D generative models trained with 2D or 3D supervision.
We show that existing feed-forward reconstruction methods can serve as effective latent encoders for training 3D generative models, thereby bridging these two paradigms.
arXiv Detail & Related papers (2024-12-31T21:23:08Z) - Proc-GS: Procedural Building Generation for City Assembly with 3D Gaussians [65.09942210464747]
Building asset creation is labor-intensive and requires specialized skills to develop design rules.
Recent generative models for building creation often overlook these patterns, leading to low visual fidelity and limited scalability.
By manipulating procedural code, we can streamline this process and generate an infinite variety of buildings.
arXiv Detail & Related papers (2024-12-10T16:45:32Z) - LoD-Loc: Aerial Visual Localization using LoD 3D Map with Neural Wireframe Alignment [16.942854458136633]
We propose a new method for visual localization in complex 3D representations.
Unlike existing localization algorithms, we estimate the pose of an Unmanned Aerial Vehicle (UAV) using a Level-of-Detail (LoD) 3D map.
arXiv Detail & Related papers (2024-10-16T06:09:27Z) - Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion [59.00571588016896]
In 3D modeling, designers often use an existing 3D model as a reference to create new ones.
This practice has inspired the development of Phidias, a novel generative model that uses diffusion for reference-augmented 3D generation.
arXiv Detail & Related papers (2024-09-17T17:59:33Z) - DiffTF++: 3D-aware Diffusion Transformer for Large-Vocabulary 3D Generation [53.20147419879056]
We introduce a diffusion-based feed-forward framework to address the challenges of large-vocabulary 3D generation with a single model.
Building upon our 3D-aware Diffusion model with TransFormer, we propose a stronger version for 3D generation, i.e., DiffTF++.
Experiments on ShapeNet and OmniObject3D convincingly demonstrate the effectiveness of our proposed modules.
arXiv Detail & Related papers (2024-05-13T17:59:51Z) - L3GO: Language Agents with Chain-of-3D-Thoughts for Generating
Unconventional Objects [53.4874127399702]
We propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation.
We develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender.
Our approach surpasses the standard GPT-4 and other language agents for 3D mesh generation on ShapeNet.
arXiv Detail & Related papers (2024-02-14T09:51:05Z) - UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation [101.2317840114147]
We present UniDream, a text-to-3D generation framework by incorporating unified diffusion priors.
- Our approach consists of three main components: (1) a dual-phase training process to obtain albedo-normal aligned multi-view diffusion and reconstruction models, (2) a progressive generation procedure for geometry and albedo textures based on Score Distillation Sampling (SDS) using the trained reconstruction and diffusion models (a minimal SDS sketch appears after this list), and (3) an innovative application of SDS for finalizing PBR generation while keeping a fixed albedo, based on the Stable Diffusion model.
arXiv Detail & Related papers (2023-12-14T09:07:37Z) - Unsupervised Roofline Extraction from True Orthophotos for LoD2 Building
Model Reconstruction [0.0]
This paper presents a method for extracting rooflines from true orthophotos using line detection for the reconstruction of building models at the LoD2 level.
The method outperforms existing plane detection-based methods and state-of-the-art deep learning methods in terms of the accuracy and completeness of the reconstructed building models.
arXiv Detail & Related papers (2023-10-02T10:23:08Z) - Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray
casting and Bayesian networks [40.7734793392562]
Reconstructing semantic 3D building models at the level of detail (LoD) 3 is a long-standing challenge.
We present a novel method, called Scan2LoD3, that accurately reconstructs semantic LoD3 building models.
We believe our method can foster the development of probability-driven semantic 3D reconstruction at LoD3.
arXiv Detail & Related papers (2023-05-10T17:01:18Z) - Anything-3D: Towards Single-view Anything Reconstruction in the Wild [61.090129285205805]
We introduce Anything-3D, a methodical framework that ingeniously combines a series of visual-language models and the Segment-Anything object segmentation model.
Our approach employs a BLIP model to generate textual descriptions, utilizes the Segment-Anything model for the effective extraction of objects of interest, and leverages a text-to-image diffusion model to lift the object into a neural radiance field.
arXiv Detail & Related papers (2023-04-19T16:39:51Z) - Combining visibility analysis and deep learning for refinement of
semantic 3D building models by conflict classification [3.2662392450935416]
We propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features.
In the method, occupancy voxels are fused with classified point clouds, which assigns semantics to the voxels.
The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate facade openings, which are reconstructed using a 3D model library.
arXiv Detail & Related papers (2023-03-10T16:01:30Z) - Elevation Estimation-Driven Building 3D Reconstruction from Single-View
Remote Sensing Imagery [20.001807614214922]
Building 3D reconstruction from remote sensing images has a wide range of applications in smart cities, photogrammetry and other fields.
We propose an efficient DSM estimation-driven reconstruction framework (Building3D) to reconstruct 3D building models from the input single-view remote sensing image.
Our Building3D is rooted in the SFFDE network for building elevation prediction, synchronized with a building extraction network for building masks, and then sequentially performs point cloud reconstruction and surface reconstruction (or CityGML model reconstruction).
arXiv Detail & Related papers (2023-01-11T17:20:30Z)
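To make the elevation-driven pipeline of the last entry (Building3D) concrete, the sketch below covers only its geometric tail end: lifting masked DSM pixels to 3D points and extruding a crude prismatic model. The array names, the 0.5 m ground sampling distance, and the flat-roof simplification are all invented for illustration; the paper's surface reconstruction is considerably more involved.

```python
# Minimal sketch of the geometric stage of an elevation-driven pipeline such
# as Building3D: predicted DSM + building mask -> 3D points -> extruded model.
# The GSD value and the flat-roof assumption are illustrative only.
import numpy as np

def mask_to_points(dsm: np.ndarray, mask: np.ndarray, gsd: float = 0.5):
    """Lift masked DSM pixels to 3D points (x, y in metres via the GSD)."""
    rows, cols = np.nonzero(mask)
    return np.column_stack([cols * gsd, rows * gsd, dsm[rows, cols]])

def extrude_flat_roof(points: np.ndarray):
    """Crude LoD1-style model: footprint bounding box + median roof height."""
    (xmin, ymin), (xmax, ymax) = points[:, :2].min(0), points[:, :2].max(0)
    height = float(np.median(points[:, 2]))
    return {"footprint": [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)],
            "height": height}

# Toy inputs standing in for the network predictions.
dsm = np.full((8, 8), 12.0)            # 12 m elevation everywhere
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:7] = True                  # one rectangular building
print(extrude_flat_roof(mask_to_points(dsm, mask)))
```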
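Several of the generative entries above, UniDream in particular, optimize a 3D representation against a frozen 2D diffusion prior via Score Distillation Sampling. Below is a rough, self-contained sketch of one SDS step, assuming PyTorch: the noise predictor is a random stand-in for a real diffusion U-Net, and the weighting w(t) = 1 - alpha_bar_t is just one common choice; none of this is UniDream's code.

```python
# Minimal sketch of one Score Distillation Sampling (SDS) step, as used by
# methods like UniDream to optimize 3D parameters against a 2D diffusion
# prior. The noise predictor and weighting here are toy stand-ins.
import torch

def sds_step(render, noise_pred_fn, alphas_cumprod, opt):
    """One SDS update: noise the rendering, ask the prior to denoise,
    and push the gradient (eps_hat - eps) back into the 3D parameters."""
    t = torch.randint(0, len(alphas_cumprod), ())
    a_t = alphas_cumprod[t]
    eps = torch.randn_like(render)
    x_t = a_t.sqrt() * render + (1 - a_t).sqrt() * eps  # forward diffusion
    with torch.no_grad():
        eps_hat = noise_pred_fn(x_t, t)                 # frozen 2D prior
    w = 1 - a_t                                         # one common weighting
    grad = w * (eps_hat - eps)
    opt.zero_grad()
    render.backward(gradient=grad)  # inject d(loss)/d(render), skip the U-Net
    opt.step()

# Toy usage: 'params' stands in for a differentiable rendering of a 3D asset.
params = torch.randn(3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([params], lr=1e-2)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
sds_step(params * 1.0, lambda x, t: torch.randn_like(x), alphas_cumprod, opt)
```

The key trick is that the gradient (eps_hat - eps) is injected directly into backward(), so the diffusion network itself is never differentiated through.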
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.