Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial
Photogrammetric 3D Pointcloud Dataset
- URL: http://arxiv.org/abs/2012.12996v1
- Date: Wed, 23 Dec 2020 21:48:47 GMT
- Authors: Gülcan Can, Dario Mantegazza, Gabriele Abbate, Sébastien Chappuis,
Alessandro Giusti
- Abstract summary: We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 $km^2$, sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce a new outdoor urban 3D pointcloud dataset, covering a total area
of 2.7 $km^2$, sampled from three Swiss cities with different characteristics.
The dataset is manually annotated for semantic segmentation with per-point
labels, and is built using photogrammetry from images acquired by multirotors
equipped with high-resolution cameras. In contrast to datasets acquired with
ground LiDAR sensors, the resulting point clouds are uniformly dense and
complete, and are useful to disparate applications, including autonomous
driving, gaming and smart city planning. As a benchmark, we report quantitative
results of PointNet++, an established point-based deep 3D semantic segmentation
model; on this model, we additionally study the impact of using different
cities for model generalization.
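The benchmarked PointNet++ model repeatedly subsamples the point cloud with farthest point sampling (FPS) to build its set-abstraction hierarchy. As a minimal illustration of that subsampling step (a NumPy sketch, not the authors' implementation):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest-point sampling: pick k points that cover the cloud.

    points: (N, 3) array of coordinates; returns indices of the k samples.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    # Distance from every point to its nearest already-chosen point.
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # the point farthest from the current set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

# Toy cloud: two well-separated noisy clusters; FPS should cover both.
rng = np.random.default_rng(7)
cloud = np.vstack([
    rng.normal(scale=0.1, size=(50, 3)),          # cluster near the origin
    rng.normal(scale=0.1, size=(50, 3)) + 10.0,   # cluster near (10, 10, 10)
])
idx = farthest_point_sampling(cloud, 4)
```

Because each new sample maximizes the distance to the already-selected set, even a handful of samples spreads across the whole cloud, which is why FPS is preferred over uniform random subsampling in point-based networks.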
Related papers
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and
Forecasting
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data
We propose a self-supervised pre-training method for 3D perception models tailored to autonomous driving data.
We leverage the availability of synchronized and calibrated image and Lidar sensors in autonomous driving setups.
Our method does not require any point cloud nor image annotations.
arXiv Detail & Related papers (2022-03-30T12:40:30Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point
Clouds
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km2.
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- H3D: Benchmark on Semantic Segmentation of High-Resolution 3D Point
Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo
This paper introduces a 3D dataset which is unique in three ways.
It depicts the village of Hessigheim (Germany) henceforth referred to as H3D.
It is designed both to promote research in 3D data analysis and to evaluate and rank emerging approaches.
arXiv Detail & Related papers (2021-02-10T09:33:48Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset,
Benchmarks and Challenges
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km2 of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Semantic Segmentation and Data Fusion of Microsoft Bing 3D Cities and
Small UAV-based Photogrammetric Data
The authors present a fully automated data segmentation and object information extraction framework for creating simulation terrain from UAV-based photogrammetric data.
Data quality issues in the aircraft-based photogrammetric data are identified.
The authors also propose a data registration workflow that combines the traditional iterative closest point (ICP) algorithm with the extracted semantic information.
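ICP alternates between matching points across two clouds and solving for the rigid transform that best aligns the matched pairs. A minimal NumPy sketch of the alignment step (the Kabsch/SVD solution, assuming correspondences are already known; this is an illustration, not the authors' pipeline):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst
    for already-matched point pairs (the Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)       # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: rotate and translate a cloud, then recover the transform.
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```

In full ICP this solve is wrapped in a loop with nearest-neighbor matching; augmenting that matching with semantic labels, as the paper describes, restricts correspondences to points of the same class.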
arXiv Detail & Related papers (2020-08-21T18:56:05Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical
Understanding of Outdoor Scene
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.