City3D: Large-scale Urban Reconstruction from Airborne Point Clouds
- URL: http://arxiv.org/abs/2201.10276v1
- Date: Tue, 25 Jan 2022 12:41:11 GMT
- Title: City3D: Large-scale Urban Reconstruction from Airborne Point Clouds
- Authors: Jin Huang, Jantien Stoter, Ravi Peters, Liangliang Nan
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a fully automatic approach for reconstructing compact 3D building models from large-scale airborne point clouds. A major challenge of urban reconstruction from airborne point clouds is that the vertical walls are typically missing. Based on the observation that urban buildings typically consist of planar roofs connected to the ground by vertical walls, we propose an approach to infer the vertical walls directly from the data. With the planar segments of both roofs and walls, we hypothesize the faces of the building surface, and the final model is obtained using an extended hypothesis-and-selection-based polygonal surface reconstruction framework. Specifically, we introduce a new energy term to encourage roof preferences and two additional hard constraints into the optimization step to ensure correct topology and enhance detail recovery. Experiments on various large-scale airborne point clouds demonstrate that the method is superior to state-of-the-art methods in terms of reconstruction accuracy and robustness. In addition, we have generated a new dataset with our method consisting of the point clouds and 3D models of 20k real-world buildings. We believe this dataset can stimulate research in urban reconstruction from airborne point clouds and the use of 3D city models in urban applications.
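The hypothesis-and-selection idea in the abstract can be illustrated with a toy sketch: candidate faces (roof and wall hypotheses) are scored by how well the points support them, and a subset is selected by minimizing an energy that combines a data-fitting term with a roof-preference term, subject to a hard manifold constraint on the edges. All face names, coverage values, and weights below are hypothetical, and the brute-force search stands in for the integer programming used on real models.

```python
from itertools import combinations

# Toy hypothesis-and-selection sketch (in the spirit of the extended
# PolyFit-style framework described in the abstract). All values are
# illustrative, not data from the paper.

# Each candidate face: (name, point-coverage score in [0, 1], is_roof)
faces = [
    ("roof_a", 0.90, True),
    ("roof_b", 0.30, True),   # weakly supported competing roof hypothesis
    ("wall_a", 0.85, False),
    ("wall_b", 0.80, False),
]

# Edges and the candidate faces incident to them. A hard constraint for a
# watertight 2-manifold: each edge must border an even number (0 or 2)
# of selected faces.
edges = {
    "e1": ["roof_a", "roof_b", "wall_a"],
    "e2": ["roof_a", "roof_b", "wall_b"],
}

W_DATA, W_ROOF = 1.0, 0.2  # weights are hypothetical

def energy(selected):
    # Data-fitting term: penalize leaving well-supported faces out.
    data = sum(cov for name, cov, _ in faces if name not in selected)
    # Roof-preference term: selecting roof faces lowers the energy.
    roof = -sum(1 for name, _, is_roof in faces
                if name in selected and is_roof)
    return W_DATA * data + W_ROOF * roof

def manifold_ok(selected):
    # Every edge must touch an even number of selected faces.
    return all(sum(f in selected for f in fs) % 2 == 0
               for fs in edges.values())

# Brute-force over all subsets (fine for a toy problem; the paper's
# framework solves this selection with integer programming).
best = min(
    (frozenset(c) for r in range(len(faces) + 1)
     for c in combinations([f[0] for f in faces], r)
     if manifold_ok(frozenset(c))),
    key=energy,
)
print(sorted(best))  # ['roof_a', 'wall_a', 'wall_b']
```

The well-supported roof wins over its weak competitor, and the walls are forced in by the manifold constraint, which is the intuition behind inferring walls jointly with roofs rather than fitting them independently.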
Related papers
- StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting [85.67616000086232]
StreetSurfGS is the first method to employ Gaussian Splatting specifically tailored for scalable urban street scene surface reconstruction.
StreetSurfGS utilizes a planar-based octree representation and segmented training to reduce memory costs, accommodate unique camera characteristics, and ensure scalability.
To address sparse views and multi-scale challenges, we use a dual-step matching strategy that leverages adjacent and long-term information.
arXiv Detail & Related papers (2024-10-06T04:21:59Z)
- FRI-Net: Floorplan Reconstruction via Room-wise Implicit Representation [18.157827697752317]
We introduce a novel method called FRI-Net for 2D floorplan reconstruction from a 3D point cloud.
By incorporating geometric priors of room layouts in floorplans into our training strategy, the generated room polygons are more geometrically regular.
Our method demonstrates improved performance compared to state-of-the-art methods, validating the effectiveness of our proposed representation for floorplan reconstruction.
arXiv Detail & Related papers (2024-07-15T13:01:44Z)
- APC2Mesh: Bridging the gap from occluded building façades to full 3D models [5.113739955215433]
We propose APC2Mesh, which integrates point completion into a 3D reconstruction pipeline.
Specifically, we leveraged complete points generated from occluded ones as input to a linearized skip attention-based deformation network for 3D mesh reconstruction.
arXiv Detail & Related papers (2024-04-03T01:29:30Z)
- Point2Building: Reconstructing Buildings from Airborne LiDAR Point Clouds [23.897507889025817]
We present a learning-based approach to reconstruct buildings as 3D polygonal meshes from airborne LiDAR point clouds.
Our model learns directly from the point cloud data, thereby reducing error propagation and increasing the fidelity of the reconstruction.
We experimentally validate our method on a collection of airborne LiDAR data of Zurich, Berlin and Tallinn.
arXiv Detail & Related papers (2024-03-04T15:46:50Z) - Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z) - Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models [97.58685709663287]
Generative pre-training can boost the performance of fundamental models in 2D vision.
In 3D vision, the over-reliance on Transformer-based backbones and the unordered nature of point clouds have restricted the further development of generative pre-training.
We propose a novel 3D-to-2D generative pre-training method that is adaptable to any point cloud model.
arXiv Detail & Related papers (2023-07-27T16:07:03Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method yields a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters.
The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Learning Reconstructability for Drone Aerial Path Planning [51.736344549907265]
We introduce the first learning-based reconstructability predictor to improve view and path planning for large-scale 3D urban scene acquisition using unmanned drones.
In contrast to previous approaches, our method learns a model that explicitly predicts how well a 3D urban scene will be reconstructed from a set of viewpoints.
arXiv Detail & Related papers (2022-09-21T08:10:26Z)
- Holistic Parameteric Reconstruction of Building Models from Point Clouds [9.93322840476651]
We propose a holistic parametric reconstruction method that considers the entire point cloud of a building simultaneously.
We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds.
A holistic optimization strategy is then introduced to simultaneously determine the parameters of the segmented primitives.
The achieved overall reconstruction quality is 0.08 meters in point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points.
arXiv Detail & Related papers (2020-05-19T05:42:23Z)
- Deep Learning Guided Building Reconstruction from Satellite Imagery-derived Point Clouds [39.36437891978871]
We present a reliable and effective approach for building model reconstruction from the point clouds generated from satellite images.
Specifically, a deep-learning approach is adopted to distinguish the shape of building roofs in complex and yet noisy scenes.
As the first effort to address the public need for large-scale city model generation, the development is deployed as open-source software.
arXiv Detail & Related papers (2020-05-19T05:38:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantees about the quality of this information and is not responsible for any consequences arising from its use.