StructuredMesh: 3D Structured Optimization of Façade Components on
Photogrammetric Mesh Models using Binary Integer Programming
- URL: http://arxiv.org/abs/2306.04184v1
- Date: Wed, 7 Jun 2023 06:40:54 GMT
- Title: StructuredMesh: 3D Structured Optimization of Façade Components on
Photogrammetric Mesh Models using Binary Integer Programming
- Authors: Libin Wang, Han Hu, Qisen Shang, Bo Xu, Qing Zhu
- Abstract summary: We present StructuredMesh, a novel approach for reconstructing façade structures conforming to the regularity of buildings within photogrammetric mesh models.
Our method involves capturing multi-view color and depth images of the building model using a virtual camera.
We then utilize the depth image to remap the detected component bounding boxes into 3D space, generating an initial façade layout.
- Score: 17.985961236568663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The lack of façade structures in photogrammetric mesh models renders them
inadequate for meeting the demands of intricate applications. Moreover, these
mesh models exhibit irregular surfaces with considerable geometric noise and
texture quality imperfections, making the restoration of structures
challenging. To address these shortcomings, we present StructuredMesh, a novel
approach for reconstructing façade structures conforming to the regularity
of buildings within photogrammetric mesh models. Our method involves capturing
multi-view color and depth images of the building model using a virtual camera
and employing a deep learning object detection pipeline to semi-automatically
extract the bounding boxes of façade components such as windows, doors, and
balconies from the color image. We then utilize the depth image to remap these
boxes into 3D space, generating an initial façade layout. Leveraging
architectural knowledge, we apply binary integer programming (BIP) to optimize
the 3D layout's structure, encompassing the positions, orientations, and sizes
of all components. The refined layout subsequently informs façade modeling
through instance replacement. We conducted experiments utilizing building mesh
models from three distinct datasets, demonstrating the adaptability,
robustness, and noise resistance of our proposed methodology. Furthermore, our
3D layout evaluation metrics reveal that the optimized layout enhances
precision, recall, and F-score by 6.5%, 4.5%, and 5.5%, respectively, in
comparison to the initial layout.
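To make the box-remapping step concrete, here is a minimal sketch, assuming a standard pinhole model for the virtual camera: a detected 2D bounding box is lifted into 3D using the rendered depth image. The function name, the median-depth heuristic for the component plane, and the intrinsics (fx, fy, cx, cy) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def box_2d_to_3d(box, depth, fx, fy, cx, cy):
    """Lift a detected 2D facade-component box into 3D camera coordinates.

    box:   (u0, v0, u1, v1) pixel corners from the object detector.
    depth: (H, W) depth image rendered by the same virtual camera.
    Returns the two opposite 3D corners of the component.
    """
    u0, v0, u1, v1 = box
    # Use the median depth inside the box as the component's plane; this is
    # robust to holes and geometric noise in the photogrammetric mesh.
    z = float(np.median(depth[v0:v1, u0:u1]))
    corners = []
    for u, v in ((u0, v0), (u1, v1)):
        # Pinhole back-projection: pixel (u, v) at depth z -> camera space.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        corners.append((x, y, z))
    return corners
```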
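The layout refinement itself is a binary integer program over component positions, orientations, and sizes. The abstract does not give the exact formulation, so the following is only a toy sketch of the idea, assuming the PuLP package with its bundled CBC solver: each component's x-center is assigned to exactly one candidate column position (a binary choice), and the program minimizes total displacement, which snaps near-aligned windows onto shared columns.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

def snap_to_columns(xs, candidates):
    """Snap component x-centers to shared column positions via BIP.

    xs:         initial x-centers of facade components (e.g., window boxes).
    candidates: candidate column positions (e.g., cluster centers of xs).
    """
    prob = LpProblem("facade_column_alignment", LpMinimize)
    # a[i][k] = 1 iff component i is assigned to candidate column k.
    a = [[LpVariable(f"a_{i}_{k}", cat=LpBinary) for k in range(len(candidates))]
         for i in range(len(xs))]
    for row in a:
        prob += lpSum(row) == 1  # each component picks exactly one column
    # Displacement costs are constants, so the objective is linear in the binaries.
    prob += lpSum(abs(x - c) * a[i][k]
                  for i, x in enumerate(xs)
                  for k, c in enumerate(candidates))
    prob.solve(PULP_CBC_CMD(msg=False))
    return [next(c for k, c in enumerate(candidates) if a[i][k].value() > 0.5)
            for i in range(len(xs))]

# Toy usage: three windows that should share two columns.
print(snap_to_columns([1.02, 0.97, 3.05], [1.0, 3.0]))  # -> [1.0, 1.0, 3.0]
```

A real formulation would additionally couple rows, sizes, and orientations; the sketch only shows why binary assignment variables make such regularization tractable.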
Related papers
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality assets with 512K Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between the 3D structure and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction [2.2954246824369218]
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis.
We propose SADIR, a shape-aware network based on diffusion models for 3D image reconstruction, to address this challenge.
arXiv Detail & Related papers (2023-09-06T19:30:22Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- 3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow [61.62796058294777]
Reconstructing 3D shape from a single 2D image is a challenging task.
Most previous methods still struggle to extract semantic attributes for the 3D reconstruction task.
We propose 3DAttriFlow to disentangle and extract semantic attributes through different semantic levels in the input images.
arXiv Detail & Related papers (2022-03-29T02:03:31Z)
- Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks for facade parsing.
We propose a novel scheme that fuses anchor-free detection into a single-stage network, enabling efficient training and better convergence.
We employ an off-the-shelf rendering engine such as Blender to reconstruct realistic, high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z)
- An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering [0.0]
Differentiable rendering is a very successful technique for single-view 3D reconstruction.
Current methods use pixel-wise losses between a rendered image of the reconstructed 3D object and ground-truth images from matched viewpoints to optimize the parameters of the 3D shape.
We propose a novel, effective loss function that evaluates how well the projections of the reconstructed 3D point cloud cover the ground-truth object's silhouette (a toy version of this idea is sketched after this list).
arXiv Detail & Related papers (2021-03-05T00:02:18Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single-view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models utilizing the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
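As flagged above, here is a toy, numpy-only sketch of the silhouette-coverage idea from "An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering". The orthographic projection and the equal weighting of the two terms are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def silhouette_coverage_loss(points, silhouette):
    """points:     (N, 3) reconstructed point cloud, x/y already in pixel units.
    silhouette: (H, W) binary ground-truth mask.
    Penalizes points projected outside the silhouette and silhouette
    pixels left uncovered by any projected point.
    """
    h, w = silhouette.shape
    # Orthographic projection: drop z, round to the nearest pixel.
    px = np.clip(np.round(points[:, 0]).astype(int), 0, w - 1)
    py = np.clip(np.round(points[:, 1]).astype(int), 0, h - 1)
    # (a) Fraction of projected points landing outside the silhouette.
    outside = 1.0 - silhouette[py, px].mean()
    # (b) Fraction of silhouette pixels not covered by any projection.
    covered = np.zeros_like(silhouette)
    covered[py, px] = 1
    uncovered = ((silhouette == 1) & (covered == 0)).sum() / max(silhouette.sum(), 1)
    return 0.5 * outside + 0.5 * uncovered
```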