SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point
Clouds
- URL: http://arxiv.org/abs/2201.04494v1
- Date: Wed, 12 Jan 2022 14:48:11 GMT
- Title: SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point
Clouds
- Authors: Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, Andrew
Markham
- Abstract summary: We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km^2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
- Score: 52.624157840253204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the recent availability and affordability of commercial depth sensors
and 3D scanners, an increasing number of 3D datasets (e.g., RGB-D images, point clouds)
have been made publicly available to facilitate research in 3D computer vision. However,
existing datasets either cover relatively small areas or have limited semantic
annotations. Fine-grained understanding of urban-scale 3D scenes is still in
its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV
photogrammetry point cloud dataset consisting of nearly three billion points
collected from three UK cities, covering 7.6 km^2. Each point in the dataset
has been labelled with fine-grained semantic annotations, resulting in a
dataset three times the size of the previously largest photogrammetric
point cloud dataset. In addition to the more commonly
encountered categories such as road and vegetation, urban-level categories
including rail, bridge, and river are also included in our dataset. Based on
this dataset, we further build a benchmark to evaluate the performance of
state-of-the-art segmentation algorithms. In particular, we provide a
comprehensive analysis and identify several key challenges limiting urban-scale
point cloud understanding. The dataset is available at
http://point-cloud-analysis.cs.ox.ac.uk.
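The benchmark described in the abstract evaluates point-wise semantic segmentation, for which per-class intersection-over-union (IoU) and its mean (mIoU) are the standard metrics. The sketch below is a minimal, generic illustration, not the authors' released evaluation code; the class list is truncated to the categories named in the abstract.

```python
import numpy as np

# Truncated, illustrative class list -- the full SensatUrban label set
# is larger; these names are taken from the abstract only.
CLASSES = ["road", "vegetation", "rail", "bridge", "river"]

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class IoU over point-wise integer labels of shape (N,).

    Classes absent from both prediction and ground truth yield NaN,
    so they drop out of the nan-aware mean below.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union
    return ious

# Random labels stand in for a model's per-point predictions.
rng = np.random.default_rng(0)
gt = rng.integers(0, len(CLASSES), size=10_000)
pred = rng.integers(0, len(CLASSES), size=10_000)
ious = per_class_iou(pred, gt, len(CLASSES))
print(dict(zip(CLASSES, ious.round(3))))
print("mIoU:", np.nanmean(ious))
```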
Related papers
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- Efficient Urban-scale Point Clouds Segmentation with BEV Projection [0.0]
Most deep point cloud models learn directly on the raw 3D points.
We propose instead to project the 3D point clouds into a dense bird's-eye-view (BEV) representation (a minimal projection sketch follows this entry).
arXiv Detail & Related papers (2021-09-19T06:49:59Z)
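As a rough illustration of the BEV idea in the entry above, the sketch below rasterises a point cloud into a dense top-down height map. The cell size and the max-height encoding are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def points_to_bev(points: np.ndarray, cell_size: float = 0.5) -> np.ndarray:
    """Project an (N, 3) xyz point cloud onto a dense bird's-eye-view grid.

    Each cell keeps the maximum point height falling inside it (a common,
    simple BEV encoding); empty cells stay at 0. cell_size is in the same
    units as the coordinates (metres assumed here).
    """
    xy_min = points[:, :2].min(axis=0)
    # Integer grid coordinates of every point.
    cols, rows = ((points[:, :2] - xy_min) / cell_size).astype(int).T
    grid = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.float32)
    # Scatter-max the heights into the grid.
    np.maximum.at(grid, (rows, cols), points[:, 2].astype(np.float32))
    return grid

# Example: 100k random points over a 200 m x 200 m tile.
pts = np.random.rand(100_000, 3) * np.array([200.0, 200.0, 30.0])
bev = points_to_bev(pts, cell_size=0.5)
print(bev.shape)  # ~(400, 400) height map, ready for a 2D CNN
```

Once projected, per-pixel predictions from a 2D segmentation network can be mapped back to the points in each cell.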
- H3D: Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo [4.263987603222371]
This paper introduces a 3D dataset that is unique in three ways.
It depicts the village of Hessigheim (Germany), henceforth referred to as H3D.
It is designed both to promote research in 3D data analysis and to evaluate and rank emerging approaches.
arXiv Detail & Related papers (2021-02-10T09:33:48Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km^2, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km^2 of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wise annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure that evaluates consistency across hierarchy levels (a toy illustration follows this entry).
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
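The consistency measure mentioned in the Campus3D entry is not spelled out in this digest; one plausible reading is checking that each point's fine-grained prediction is a child of its predicted coarse label. The toy taxonomy and scoring below are illustrative assumptions only.

```python
# Toy two-level hierarchy -- the real Campus3D taxonomy differs.
PARENT_OF = {
    "car": "vehicle", "truck": "vehicle",
    "grass": "vegetation", "tree": "vegetation",
}

def consistency_rate(coarse_preds, fine_preds):
    """Fraction of points whose fine label is a child of the coarse label."""
    ok = sum(PARENT_OF.get(f) == c for c, f in zip(coarse_preds, fine_preds))
    return ok / len(fine_preds)

# The second point is inconsistent: "tree" predicted under "vehicle".
coarse = ["vehicle", "vehicle", "vegetation"]
fine = ["car", "tree", "grass"]
print(consistency_rate(coarse, fine))  # -> 0.666...
```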
- PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding [107.02479689909164]
In this work, we aim to facilitate research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
- Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways [31.619465114439667]
Toronto-3D is a large-scale urban outdoor point cloud dataset acquired by a mobile laser scanning (MLS) system in Toronto, Canada for semantic segmentation.
This dataset covers approximately 1 km of roadway and consists of about 78.3 million points with 8 labeled object classes.
Baseline experiments for semantic segmentation were conducted and the results confirmed the capability of this dataset to train deep learning models effectively.
arXiv Detail & Related papers (2020-03-18T15:45:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.