Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation
of Urban Roadways
- URL: http://arxiv.org/abs/2003.08284v3
- Date: Thu, 16 Apr 2020 14:48:42 GMT
- Title: Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation
of Urban Roadways
- Authors: Weikai Tan, Nannan Qin, Lingfei Ma, Ying Li, Jing Du, Guorong Cai, Ke
Yang, Jonathan Li
- Abstract summary: Toronto-3D is a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada, for semantic segmentation.
This dataset covers approximately 1 km of point clouds and consists of about 78.3 million points with 8 labeled object classes.
Baseline experiments for semantic segmentation were conducted and the results confirmed the capability of this dataset to train deep learning models effectively.
- Score: 31.619465114439667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation of large-scale outdoor point clouds is essential for
urban scene understanding in various applications, especially autonomous
driving and urban high-definition (HD) mapping. With the rapid development of
mobile laser scanning (MLS) systems, massive point clouds are available for
scene understanding, but publicly accessible large-scale labeled datasets,
which are essential for developing learning-based methods, are still limited.
This paper introduces Toronto-3D, a large-scale urban outdoor point cloud
dataset acquired by an MLS system in Toronto, Canada, for semantic segmentation.
This dataset covers approximately 1 km of point clouds and consists of about
78.3 million points with 8 labeled object classes. Baseline experiments for
semantic segmentation were conducted and the results confirmed the capability
of this dataset to train deep learning models effectively. Toronto-3D is
released to encourage new research, and the labels will be improved and updated
with feedback from the research community.
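As a practical illustration, here is a minimal Python sketch of how a labelled MLS tile such as those in Toronto-3D could be inspected before training, together with the per-class IoU computation typically used to score the baseline segmentation experiments the abstract mentions. The PLY layout, the label field name "scalar_Label", the class-id mapping, and the tile name are assumptions for illustration, not details confirmed by the abstract.
```python
# Minimal inspection sketch for a labelled MLS tile such as those in
# Toronto-3D. Assumptions (not confirmed by the abstract): tiles are PLY
# files whose vertex element carries a per-point field named
# "scalar_Label", and ids 1..8 map to the class names below.
import numpy as np
from plyfile import PlyData  # pip install plyfile

CLASS_NAMES = {  # assumed id -> name mapping for the 8 labelled classes
    1: "Road", 2: "Road marking", 3: "Natural", 4: "Building",
    5: "Utility line", 6: "Pole", 7: "Car", 8: "Fence",
}

def class_distribution(ply_path):
    """Per-class point counts of one tile."""
    vertex = PlyData.read(ply_path)["vertex"]
    labels = np.asarray(vertex["scalar_Label"], dtype=np.int64)
    ids, counts = np.unique(labels, return_counts=True)
    return {CLASS_NAMES.get(i, f"class_{i}"): int(c) for i, c in zip(ids, counts)}

def iou_per_class(confusion):
    """Per-class IoU from a KxK confusion matrix (rows = ground truth)."""
    conf = np.asarray(confusion, dtype=np.float64)
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    return np.where(denom > 0, tp / denom, np.nan)

if __name__ == "__main__":
    print(class_distribution("L001.ply"))  # hypothetical tile name
```
The mean of the per-class IoU values (mIoU) is the usual single-number summary reported for such baselines.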
Related papers
- MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds the largest multi-modal 3D scene dataset and benchmark to date with hierarchical grounded language annotations, MMScan.
The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
arXiv Detail & Related papers (2024-06-13T17:59:30Z)
- FRACTAL: An Ultra-Large-Scale Aerial Lidar Dataset for 3D Semantic Segmentation of Diverse Landscapes [0.0]
We present an ultra-large-scale aerial Lidar dataset made of 100,000 dense point clouds with high-quality labels for 7 semantic classes.
We describe the data collection, annotation, and curation process of the dataset.
We provide baseline semantic segmentation results using a state-of-the-art 3D point cloud classification model.
arXiv Detail & Related papers (2024-05-07T19:37:22Z)
- Point Cloud Segmentation Using Transfer Learning with RandLA-Net: A Case Study on Urban Areas [0.5242869847419834]
This paper presents the application of RandLA-Net, a state-of-the-art neural network architecture, for the 3D segmentation of large-scale point cloud data in urban areas.
The study focuses on three major Chinese cities, namely Chengdu, Jiaoda, and Shenzhen, leveraging their unique characteristics to enhance segmentation performance.
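A generic PyTorch sketch of the transfer-learning recipe this summary describes: keep a pretrained point-cloud encoder, swap in a fresh per-point classification head for the target label set, and fine-tune. RandLA-Net internals are omitted; `backbone`, `feat_dim`, and all other names here are illustrative assumptions, not the paper's actual code.
```python
# Transfer learning for point-cloud segmentation, sketched generically.
import torch
import torch.nn as nn

class TransferSegModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                      # pretrained encoder
        self.head = nn.Linear(feat_dim, num_classes)  # new head for target labels

    def forward(self, points):
        feats = self.backbone(points)   # (B, N, feat_dim) per-point features
        return self.head(feats)         # (B, N, num_classes) logits

def finetune_optimizer(model, freeze_backbone=True, lr=1e-3):
    """Optionally freeze the pretrained encoder so only the new head trains."""
    if freeze_backbone:
        for p in model.backbone.parameters():
            p.requires_grad = False
    return torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
```
Freezing the encoder is the cheapest variant; unfreezing it with a lower learning rate is the usual alternative when the target city differs substantially from the source data.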
arXiv Detail & Related papers (2023-12-19T06:13:58Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
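For intuition, here is a simple sequence-level split-generation sketch (an illustrative stand-in, not the paper's actual algorithm): shuffle the labelled sequences, then assign them to the validation set until it holds roughly a target share of all points. All names and sizes are hypothetical.
```python
# Sequence-level train/val split generation (illustrative heuristic).
import random

def make_split(seq_sizes, val_fraction=0.2, seed=0):
    """seq_sizes: dict of sequence id -> point count. Returns (train, val)."""
    rng = random.Random(seed)
    ids = list(seq_sizes)
    rng.shuffle(ids)
    target = val_fraction * sum(seq_sizes.values())
    val, val_points = [], 0
    for sid in ids:
        if val_points < target:
            val.append(sid)
            val_points += seq_sizes[sid]
    train = [sid for sid in ids if sid not in val]
    return sorted(train), sorted(val)

# Example with 23 hypothetical labelled sequences of varying size:
sizes = {f"seq_{i:02d}": 1_000_000 + 50_000 * i for i in range(23)}
train_ids, val_ids = make_split(sizes)
```
Splitting at the sequence level, rather than per point, avoids leaking near-duplicate scans of the same street between train and validation.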
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
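One simple way to quantify cross-hierarchy consistency (an illustrative stand-in, not the measurement proposed in the paper): the fraction of points whose predicted fine-level label, lifted through a fine-to-coarse parent map, agrees with the label predicted directly at the coarse level. The label values and parent map below are hypothetical.
```python
# Cross-hierarchy label consistency, sketched as a simple agreement rate.
import numpy as np

def hierarchy_consistency(fine_pred, coarse_pred, fine_to_coarse):
    """fine_pred, coarse_pred: per-point integer label arrays of equal length;
    fine_to_coarse: dict mapping each fine label to its parent coarse label."""
    lifted = np.array([fine_to_coarse[f] for f in fine_pred])
    return float(np.mean(lifted == np.asarray(coarse_pred)))

# Toy example: fine labels 0 ("car") and 1 ("bus") both map to coarse 0
# ("vehicle"); the third point's coarse prediction disagrees.
print(hierarchy_consistency([0, 1, 1], [0, 0, 1], {0: 0, 1: 0}))  # -> 0.666...
```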
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.