Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof
Structures from Point Clouds
- URL: http://arxiv.org/abs/2307.11914v1
- Date: Fri, 21 Jul 2023 21:38:57 GMT
- Title: Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof
Structures from Point Clouds
- Authors: Ruisheng Wang, Shangfeng Huang and Hongxin Yang
- Abstract summary: Existing datasets for 3D modeling mainly focus on common objects such as furniture or cars.
We present an urban-scale dataset consisting of more than 160 thousand buildings along with corresponding point clouds, mesh and wire-frame models, covering 16 cities in Estonia over about 998 km2.
Experimental results indicate that Building3D poses challenges of high intra-class variance, data imbalance and large-scale noise.
- Score: 4.38301148531795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Urban modeling from LiDAR point clouds is an important topic in computer
vision, computer graphics, photogrammetry and remote sensing. 3D city models
have found a wide range of applications in smart cities, autonomous navigation,
urban planning and mapping etc. However, existing datasets for 3D modeling
mainly focus on common objects such as furniture or cars. Lack of building
datasets has become a major obstacle for applying deep learning technology to
specific domains such as urban modeling. In this paper, we present an
urban-scale dataset consisting of more than 160 thousand buildings along with
corresponding point clouds, mesh and wire-frame models, covering 16 cities in
Estonia over about 998 km2. We extensively evaluate the performance of
state-of-the-art algorithms, including handcrafted and deep feature based
methods. Experimental results indicate that Building3D poses challenges of high
intra-class variance, data imbalance and large-scale noise. Building3D is the first and largest
urban-scale building modeling benchmark, allowing a comparison of supervised
and self-supervised learning methods. We believe that our Building3D will
facilitate future research on urban modeling, aerial path planning, mesh
simplification, and semantic/part segmentation etc.
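Since each building in Building3D comes with a point cloud and a matching wire-frame model, a natural first step is loading both for a single sample. The sketch below is illustrative only: the file names, the plain-text XYZ point-cloud layout, and the OBJ wire-frame convention ('v' vertex and 'l' edge records) are assumptions, not the dataset's documented release format.

    # Minimal loading sketch (Python). Paths and formats are assumptions for
    # illustration; adapt them to the dataset's actual release layout.
    import numpy as np

    def load_point_cloud(path):
        """Read an ASCII point cloud with one 'x y z ...' row per point."""
        return np.loadtxt(path, usecols=(0, 1, 2), dtype=np.float64)

    def load_wireframe(path):
        """Read a wire-frame OBJ: 'v x y z' corners and 'l i j' edges (1-based)."""
        corners, edges = [], []
        with open(path) as f:
            for line in f:
                tokens = line.split()
                if not tokens:
                    continue
                if tokens[0] == "v":
                    corners.append([float(t) for t in tokens[1:4]])
                elif tokens[0] == "l":
                    edges.append([int(t) - 1 for t in tokens[1:3]])
        return np.asarray(corners), np.asarray(edges, dtype=np.int64)

    # Hypothetical file names for one building sample.
    points = load_point_cloud("building_0001.xyz")
    corners, edges = load_wireframe("building_0001.obj")
    print(points.shape, corners.shape, edges.shape)

Keeping the wire-frame as explicit corner and edge arrays makes it straightforward to compare a predicted roof structure against the ground-truth model.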
Related papers
- MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering
and Beyond [69.37319723095746]
We build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research.
We develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities.
The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps with a total size of 28 km2.
arXiv Detail & Related papers (2023-09-28T16:06:02Z)
- UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation [50.52615875873055]
UrbanBIS comprises six real urban scenes, with 2.5 billion points, covering a vast area of 10.78 square kilometers.
UrbanBIS provides semantic-level annotations on a rich set of urban objects, including buildings, vehicles, vegetation, roads, and bridges.
UrbanBIS is the first 3D dataset that introduces fine-grained building sub-categories.
arXiv Detail & Related papers (2023-05-04T08:01:38Z)
- Objaverse: A Universe of Annotated 3D Objects [53.2537614157313]
We present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive tags, captions and animations.
We demonstrate the large potential of Objaverse 3D models via four applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models.
arXiv Detail & Related papers (2022-12-15T18:56:53Z)
- sat2pc: Estimating Point Cloud of Building Roofs from 2D Satellite Images [1.8884278918443564]
We propose sat2pc, a deep learning architecture that predicts the point cloud of a building roof from a single 2D satellite image.
Our results show that sat2pc was able to outperform existing baselines by at least 18.6%.
arXiv Detail & Related papers (2022-05-25T03:24:40Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the previous existing largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- BuildingNet: Learning to Label 3D Buildings [19.641000866952815]
BuildingNet provides: (a) a large-scale dataset of 3D building models whose exteriors are consistently labeled, and (b) a neural network that labels building meshes by analyzing spatial and structural relations of their geometric primitives.
The dataset covers several building categories, such as houses, churches, skyscrapers, town halls and castles.
arXiv Detail & Related papers (2021-10-11T01:45:26Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km2, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km2 of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- HoliCity: A City-Scale Data Platform for Learning Holistic 3D Structures [39.2984574045825]
This dataset has 6,300 real-world panoramas that are accurately aligned with a CAD model of downtown London covering an area of 20 km2.
The ultimate goal of this dataset is to support real applications for city reconstruction, mapping, and augmented reality.
arXiv Detail & Related papers (2020-08-07T17:34:47Z)