InLUT3D: Challenging real indoor dataset for point cloud analysis
- URL: http://arxiv.org/abs/2408.03338v1
- Date: Mon, 22 Jul 2024 09:56:31 GMT
- Title: InLUT3D: Challenging real indoor dataset for point cloud analysis
- Authors: Jakub Walczak
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we introduce the InLUT3D point cloud dataset, a comprehensive resource designed to advance the field of scene understanding in indoor environments. The dataset covers diverse spaces within the W7 faculty buildings of Lodz University of Technology, characterised by high-resolution laser-based point clouds and manual labelling. Alongside the dataset, we propose metrics and benchmarking guidelines essential for ensuring trustworthy and reproducible results in algorithm evaluation. We anticipate that the introduction of the InLUT3D dataset and its associated benchmarks will catalyse future advancements in 3D scene understanding, facilitating methodological rigour and inspiring new approaches in the field.
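The abstract mentions proposed metrics and benchmarking guidelines but does not reproduce them here. As a point of reference only, the following is a minimal sketch of mean intersection-over-union (mIoU), the standard point-level metric that segmentation benchmarks of this kind build on; the function, the class count, and the random label arrays are illustrative assumptions, not the paper's actual protocol.

```python
# A minimal mIoU sketch for point cloud semantic segmentation.
# Assumption: per-point integer class labels; this is NOT the exact
# metric definition from the InLUT3D paper, only the standard baseline.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean of per-class IoU over classes present in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:  # class absent everywhere: skip rather than count as 0 or 1
            continue
        intersection = np.logical_and(pred_c, gt_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Hypothetical usage: one million points, 13 semantic classes.
rng = np.random.default_rng(0)
pred = rng.integers(0, 13, size=1_000_000)
gt = rng.integers(0, 13, size=1_000_000)
print(f"mIoU: {mean_iou(pred, gt, 13):.3f}")
```

Skipping classes absent from both prediction and ground truth avoids rewarding or penalising a method for classes a scene simply does not contain, one of the choices benchmarking guidelines typically have to pin down.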
Related papers
- BelHouse3D: A Benchmark Dataset for Assessing Occlusion Robustness in 3D Point Cloud Semantic Segmentation [2.446672595462589]
We introduce the BelHouse3D dataset, a new synthetic point cloud dataset designed for 3D indoor scene semantic segmentation.
This dataset is constructed using real-world references from 32 houses in Belgium, ensuring that the synthetic data closely aligns with real-world conditions.
arXiv Detail & Related papers (2024-11-20T12:09:43Z) - MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds MMScan, the largest multi-modal 3D scene dataset and benchmark to date with hierarchical grounded language annotations.
The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
arXiv Detail & Related papers (2024-06-13T17:59:30Z) - Building-PCC: Building Point Cloud Completion Benchmarks [0.0]
LiDAR technology has been widely applied to the collection of 3D data in urban scenes.
The collected point cloud data often exhibit incompleteness due to factors such as occlusion, signal absorption, and specular reflection.
This paper explores the application of point cloud completion technologies to processing these incomplete data; a sketch of the Chamfer distance, the metric such benchmarks commonly report, appears after this list.
arXiv Detail & Related papers (2024-04-24T04:50:50Z) - A Survey of Label-Efficient Deep Learning for 3D Point Clouds [109.07889215814589]
This paper presents the first comprehensive survey of label-efficient learning of point clouds.
We propose a taxonomy that organizes label-efficient learning methods based on the data prerequisites provided by different types of labels.
For each approach, we outline the problem setup and provide an extensive literature review that showcases relevant progress and challenges.
arXiv Detail & Related papers (2023-05-31T12:54:51Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, making the dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z) - Scalable Scene Flow from Point Clouds in the Real World [30.437100097997245]
We introduce a new large-scale benchmark for scene flow based on the Waymo Open Dataset.
We show how previous works were limited by the amount of real LiDAR data available.
We introduce FastFlow3D, a model architecture that provides real-time inference on the full point cloud.
arXiv Detail & Related papers (2021-03-01T20:56:05Z) - RELLIS-3D Dataset: Data, Benchmarks and Analysis [16.803548871633957]
RELLIS-3D is a multimodal dataset collected in an off-road environment.
The data was collected on the Rellis Campus of Texas A&M University.
arXiv Detail & Related papers (2020-11-17T18:28:01Z) - Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z) - Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure of consistency across the label hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z) - PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding [107.02479689909164]
In this work, we aim to facilitate research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
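As flagged in the Building-PCC entry above, completion benchmarks are usually scored with the Chamfer distance. The sketch below is a non-authoritative illustration of the symmetric squared variant; the function name, the SciPy KD-tree implementation, and the random point sets are assumptions for the example, not details taken from the paper.

```python
# A minimal Chamfer distance sketch for point cloud completion evaluation.
# Assumption: squared, symmetric variant; papers differ on squaring and
# normalisation, so treat this as one common convention, not THE metric.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of mean squared nearest-neighbour distances between two (N, 3) point sets."""
    d_ab, _ = cKDTree(b).query(a)  # for each point in a, distance to nearest point in b
    d_ba, _ = cKDTree(a).query(b)  # for each point in b, distance to nearest point in a
    return float(np.mean(d_ab ** 2) + np.mean(d_ba ** 2))

# Hypothetical usage: compare a completed cloud against denser ground truth.
rng = np.random.default_rng(0)
completed = rng.random((2048, 3))
ground_truth = rng.random((4096, 3))
print(f"Chamfer distance: {chamfer_distance(completed, ground_truth):.4f}")
```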
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.