Reconstructing Compact Building Models from Point Clouds Using Deep
Implicit Fields
- URL: http://arxiv.org/abs/2112.13142v1
- Date: Fri, 24 Dec 2021 21:32:32 GMT
- Title: Reconstructing Compact Building Models from Point Clouds Using Deep
Implicit Fields
- Authors: Zhaiyu Chen, Seyran Khademi, Hugo Ledoux, Liangliang Nan
- Abstract summary: We present a novel framework for reconstructing compact, watertight, polygonal building models from point clouds.
Experiments on both synthetic and real-world point clouds have demonstrated that, with our neural-guided strategy, high-quality building models can be obtained with significant advantages in fidelity, compactness, and computational efficiency.
- Score: 4.683612295430956
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Three-dimensional (3D) building models play an increasingly pivotal role in
many real-world applications while obtaining a compact representation of
buildings remains an open problem. In this paper, we present a novel framework
for reconstructing compact, watertight, polygonal building models from point
clouds. Our framework comprises three components: (a) a cell complex is
generated via adaptive space partitioning that provides a polyhedral embedding
as the candidate set; (b) an implicit field is learned by a deep neural network
that facilitates building occupancy estimation; (c) a Markov random field is
formulated to extract the outer surface of a building via combinatorial
optimization. We evaluate and compare our method with state-of-the-art methods
in shape reconstruction, surface approximation, and geometry simplification.
Experiments on both synthetic and real-world point clouds have demonstrated
that, with our neural-guided strategy, high-quality building models can be
obtained with significant advantages in fidelity, compactness, and
computational efficiency. Our method shows robustness to noise and insufficient
measurements, and it can directly generalize from synthetic scans to real-world
measurements.
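To make the three components concrete, below is a minimal, self-contained sketch of the neural-guided pipeline. It is an illustration under simplifying assumptions, not the authors' implementation: an axis-aligned grid stands in for adaptive space partitioning, an analytic L-shaped field stands in for the learned deep implicit field, and iterated conditional modes stands in for the paper's combinatorial optimization of the Markov random field.
```python
# Hedged sketch of the three-stage idea; all names and sizes are illustrative.
import itertools
import numpy as np


def build_cell_complex(splits):
    """(a) Partition the bounding box into axis-aligned cells (a stand-in
    for adaptive space partitioning with planes detected from the cloud)."""
    intervals = list(zip(splits[:-1], splits[1:]))
    return [((x0, y0, z0), (x1, y1, z1))
            for (x0, x1), (y0, y1), (z0, z1)
            in itertools.product(intervals, intervals, intervals)]


def occupancy(points):
    """(b) Toy implicit field: 1 inside an L-shaped 'building', else 0.
    In the paper this is a deep network trained on the point cloud."""
    x, y, z = points.T
    wing_a = (x < 0.6) & (y < 1.0) & (z < 0.8)
    wing_b = (x < 1.0) & (y < 0.4) & (z < 0.5)
    return (wing_a | wing_b).astype(float)


def cell_occupancy(cell, n=256, rng=np.random.default_rng(0)):
    """Estimate a cell's occupancy by averaging random field queries."""
    lo, hi = np.asarray(cell[0]), np.asarray(cell[1])
    return occupancy(rng.uniform(lo, hi, size=(n, 3))).mean()


def share_facet(a, b):
    """True if two axis-aligned cells touch along a full 2D facet."""
    touch = sum(a[1][k] == b[0][k] or b[1][k] == a[0][k] for k in range(3))
    overlap = sum(min(a[1][k], b[1][k]) > max(a[0][k], b[0][k])
                  for k in range(3))
    return touch == 1 and overlap == 2


def mrf_extract(cells, occ, smooth=0.2, n_iters=10):
    """(c) Binary MRF over cells: unary = deviation from estimated
    occupancy, pairwise = penalty for label changes across shared facets.
    Solved with iterated conditional modes for brevity; the paper uses
    combinatorial optimization."""
    labels = (occ > 0.5).astype(int)
    nbrs = [[j for j in range(len(cells))
             if j != i and share_facet(cells[i], cells[j])]
            for i in range(len(cells))]
    for _ in range(n_iters):
        for i in range(len(cells)):
            costs = [abs(lab - occ[i])
                     + smooth * sum(lab != labels[j] for j in nbrs[i])
                     for lab in (0, 1)]
            labels[i] = int(np.argmin(costs))
    return labels


splits = np.array([0.0, 0.4, 0.6, 1.0])
cells = build_cell_complex(splits)                  # 27 candidate cells
occ = np.array([cell_occupancy(c) for c in cells])  # per-cell occupancy
labels = mrf_extract(cells, occ)
# The polygonal model is bounded by the facets between differently
# labelled cells (plus box-boundary facets of inside cells).
print(f"{labels.sum()} of {len(cells)} cells labelled inside")
```
Swapping the toy occupancy function for a trained network and the ICM loop for an exact solver recovers the structure of the full pipeline.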
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves on SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- GEM3D: GEnerative Medial Abstractions for 3D Shape Synthesis [25.594334301684903]
We introduce GEM3D -- a new deep, topology-aware generative model of 3D shapes.
A key ingredient of our method is a neural skeleton-based representation that encodes both shape topology and geometry.
We demonstrate significantly more faithful surface reconstruction and diverse shape generation results compared to the state-of-the-art.
arXiv Detail & Related papers (2024-02-26T20:00:57Z)
- Learning to Generate 3D Representations of Building Roofs Using Single-View Aerial Imagery [68.3565370706598]
We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image.
Unlike alternative methods that require multiple images of the same object, our approach estimates 3D roof meshes from a single image.
arXiv Detail & Related papers (2023-03-20T15:47:05Z)
- Parametrizing Product Shape Manifolds by Composite Networks [5.772786223242281]
We show that it is possible to learn an efficient neural network approximation for shape spaces with a special product structure.
Our proposed architecture leverages this structure by separately learning approximations for the low-dimensional factors and a subsequent combination.
arXiv Detail & Related papers (2023-02-28T15:31:23Z)
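To illustrate the separate-then-combine idea above, here is a hedged sketch: one small network per low-dimensional factor of the product manifold, fused by a combination network. Layer sizes, the latent width, and the concatenation-based fusion are assumptions for illustration, not the paper's architecture.
```python
# Hedged sketch of a composite network for a product-structured shape space.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, d_out))

class CompositeNet(nn.Module):
    def __init__(self, factor_dims, latent=16, out_dim=3):
        super().__init__()
        # one approximation network per low-dimensional factor
        self.factors = nn.ModuleList(mlp(d, latent) for d in factor_dims)
        # combination network fuses the per-factor features
        self.combine = mlp(latent * len(factor_dims), out_dim)

    def forward(self, factor_inputs):
        feats = [f(x) for f, x in zip(self.factors, factor_inputs)]
        return self.combine(torch.cat(feats, dim=-1))

net = CompositeNet(factor_dims=[2, 3])
y = net([torch.randn(8, 2), torch.randn(8, 3)])  # -> (8, 3)
```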
- Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local and possibly repeating geometry, from global, coarse structures.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z)
- Automated LoD-2 Model Reconstruction from Very-High-Resolution Satellite-derived Digital Surface Model and Orthophoto [1.2691047660244335]
We propose a model-driven method that reconstructs LoD-2 building models following a "decomposition-optimization-fitting" paradigm.
Our method addresses several technical shortcomings of existing methods and yields high-quality results in practice.
arXiv Detail & Related papers (2021-09-08T19:03:09Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly; it boils down to determining the weights and the high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
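A rough sketch of the learned-weight interpolation idea above: each seed point spawns r new points as convex combinations of its k nearest neighbours, with the weights predicted by a small network. The k-NN construction, `WeightNet`, and all sizes are illustrative assumptions rather than the paper's design.
```python
# Hedged sketch: upsampling a point cloud with learned interpolation weights.
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Predicts r sets of convex weights over k neighbours per point."""
    def __init__(self, k=8, r=4):
        super().__init__()
        self.k, self.r = k, r
        self.net = nn.Sequential(nn.Linear(3 * k, 128), nn.ReLU(),
                                 nn.Linear(128, r * k))

    def forward(self, nbr_offsets):          # (n, k, 3) neighbour offsets
        n = nbr_offsets.shape[0]
        w = self.net(nbr_offsets.reshape(n, -1)).reshape(n, self.r, self.k)
        return torch.softmax(w, dim=-1)      # weights sum to 1 per new point

def upsample(points, weight_net):
    """points: (n, 3) -> (n * r, 3) denser cloud."""
    d = torch.cdist(points, points)                    # pairwise distances
    idx = d.topk(weight_net.k, largest=False).indices  # k nearest (incl. self)
    nbrs = points[idx]                                 # (n, k, 3)
    w = weight_net(nbrs - points[:, None, :])          # (n, r, k)
    return torch.einsum('nrk,nkd->nrd', w, nbrs).reshape(-1, 3)

pts = torch.randn(32, 3)
dense = upsample(pts, WeightNet())  # -> (128, 3)
```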
- Sparse-data based 3D surface reconstruction with vector matching [4.471370467116141]
A new model is proposed, based on normal-vector matching combined with first-order and second-order total variation regularizers.
A fast algorithm based on the augmented Lagrangian is also proposed.
arXiv Detail & Related papers (2020-09-28T00:36:49Z)
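Schematically, such a model minimizes an energy that couples a normal-matching fidelity term with first- and second-order total variation. The functional below is a reconstruction for illustration only: the surface function u, the measured normal field \mathbf{n}, and the weights \lambda, \alpha, \beta are assumptions, not taken from the paper.
```latex
% Schematic form only; not the paper's exact functional.
\min_{u}\;
\lambda \int_{\Omega} \lvert \nabla u - \mathbf{n} \rvert \,\mathrm{d}x
\;+\; \alpha \int_{\Omega} \lvert \nabla u \rvert \,\mathrm{d}x
\;+\; \beta \int_{\Omega} \lvert \nabla^{2} u \rvert \,\mathrm{d}x
```
The non-smooth terms motivate the proposed augmented Lagrangian solver, which typically splits such terms into simpler subproblems.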
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
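To illustrate the encoder-decoder pattern described above, here is a hedged single-plane sketch: point features are scattered onto a 2D feature plane, processed by a small CNN, and bilinearly sampled at query locations to predict occupancy. Resolutions, layer sizes, and the single xy-plane variant are illustrative assumptions; the paper's architecture differs in detail.
```python
# Hedged sketch of a convolutional encoder + implicit occupancy decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccSketch(nn.Module):
    def __init__(self, res=32, feat=16):
        super().__init__()
        self.res = res
        self.point_mlp = nn.Linear(3, feat)
        self.cnn = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat, feat, 3, padding=1))
        self.decoder = nn.Sequential(nn.Linear(feat + 3, 64), nn.ReLU(),
                                     nn.Linear(64, 1))

    def forward(self, pts, queries):
        # pts, queries in [-1, 1]^3; scatter point features to an xy plane
        feat = self.point_mlp(pts)                            # (n, c)
        ij = ((pts[:, :2] * 0.5 + 0.5) * (self.res - 1)).long()
        plane = torch.zeros(feat.shape[1], self.res, self.res)
        plane[:, ij[:, 1], ij[:, 0]] = feat.T                 # last write wins
        plane = self.cnn(plane[None])                         # (1, c, r, r)
        # bilinearly sample plane features at the query xy locations
        grid = queries[None, None, :, :2]                     # (1, 1, m, 2)
        q_feat = F.grid_sample(plane, grid, align_corners=True)
        q_feat = q_feat[0, :, 0].T                            # (m, c)
        return self.decoder(torch.cat([q_feat, queries], -1))  # occupancy logits

model = ConvOccSketch()
logits = model(torch.rand(256, 3) * 2 - 1, torch.rand(64, 3) * 2 - 1)
```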