RELLIS-3D Dataset: Data, Benchmarks and Analysis
- URL: http://arxiv.org/abs/2011.12954v4
- Date: Wed, 25 May 2022 15:11:10 GMT
- Title: RELLIS-3D Dataset: Data, Benchmarks and Analysis
- Authors: Peng Jiang, Philip Osteen, Maggie Wigness, Srikanth Saripalli
- Abstract summary: RELLIS-3D is a multimodal dataset collected in an off-road environment.
The data was collected on the Rellis Campus of Texas A&M University.
- Score: 16.803548871633957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic scene understanding is crucial for robust and safe autonomous
navigation, particularly so in off-road environments. Recent deep learning
advances for 3D semantic segmentation rely heavily on large sets of training
data; however, existing autonomy datasets either represent urban environments or
lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal
dataset collected in an off-road environment, which contains annotations for
13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis
Campus of Texas A&M University and presents challenges to existing algorithms
related to class imbalance and environmental topography. Additionally, we
evaluate the current state-of-the-art deep learning semantic segmentation
models on this dataset. Experimental results show that RELLIS-3D presents
challenges for algorithms designed for segmentation in urban environments. This
novel dataset provides the resources needed by researchers to continue to
develop more advanced algorithms and investigate new research directions to
enhance autonomous navigation in off-road environments. RELLIS-3D is available
at https://github.com/unmannedlab/RELLIS-3D
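For readers who want to inspect the data, here is a minimal sketch of loading one annotated LiDAR scan and checking its label distribution, which makes the class-imbalance challenge above concrete. It assumes the SemanticKITTI-style binary layout distributed in the RELLIS-3D repository; the paths below are illustrative, not exact.

```python
import numpy as np
from collections import Counter

def load_scan(bin_path):
    # Each LiDAR point is stored as four float32 values: x, y, z, intensity.
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

def load_labels(label_path):
    # SemanticKITTI convention: the lower 16 bits of each uint32 hold the
    # semantic class id, the upper 16 bits the instance id.
    return np.fromfile(label_path, dtype=np.uint32) & 0xFFFF

# Illustrative paths; substitute a real sequence/scan from the repository.
points = load_scan("sequence00/os1_cloud_node_kitti_bin/000000.bin")
labels = load_labels("sequence00/os1_cloud_node_semantickitti_label_id/000000.label")
assert points.shape[0] == labels.shape[0]

# A quick per-scan view of the class imbalance noted in the abstract.
print(Counter(labels.tolist()).most_common(5))
```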
Related papers
- Are We Ready for Real-Time LiDAR Semantic Segmentation in Autonomous Driving? [42.348499880894686]
Scene semantic segmentation can be achieved by directly integrating 3D spatial data with specialized deep neural networks.
We investigate various 3D semantic segmentation methodologies and analyze their performance and capabilities for resource-constrained inference on embedded NVIDIA Jetson platforms.
arXiv Detail & Related papers (2024-10-10T20:47:33Z)
- UdeerLID+: Integrating LiDAR, Image, and Relative Depth with Semi-Supervised [12.440461420762265]
Road segmentation is a critical task for autonomous driving systems.
One of the primary challenges is the scarcity of large-scale, accurately labeled datasets.
Our work introduces an innovative approach that integrates LiDAR point cloud data, visual images, and relative depth maps.
arXiv Detail & Related papers (2024-09-10T03:57:30Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
The LiDAR-CS dataset is the first dataset to address sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology which can enable autonomous vehicles to perceive and understand the subtle and complex behaviors of pedestrians.
Our method embeds LiDAR points into pixel-aligned multi-modal features, which are passed through a sequence of Transformer refinement stages.
It makes efficient use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin.
arXiv Detail & Related papers (2022-12-15T11:15:14Z)
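HUM3DIL's pixel-aligned embedding is easy to picture with a short sketch. The following is not the paper's implementation, only a generic illustration under assumed names and shapes: project camera-frame LiDAR points through intrinsics K, bilinearly sample a 2D feature map at the resulting pixels, concatenate the sampled features with the raw 3D coordinates, and refine the per-point tokens with a Transformer encoder.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(points_cam, feat_map, K):
    # points_cam: (P, 3) LiDAR points already in the camera frame (z > 0).
    # feat_map:   (C, H, W) feature map from any 2D image backbone.
    # K:          (3, 3) camera intrinsic matrix (assumed known).
    C, H, W = feat_map.shape
    uvw = points_cam @ K.T                       # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]                # (P, 2) pixel coordinates
    # grid_sample expects coordinates normalized to [-1, 1].
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    sampled = F.grid_sample(feat_map[None], grid[None, None],
                            align_corners=True)  # (1, C, 1, P)
    img_feats = sampled[0, :, 0].T               # (P, C) pixel-aligned features
    # Fuse 2D appearance with 3D geometry per point.
    return torch.cat([points_cam, img_feats], dim=-1)   # (P, C + 3)

# Toy usage with assumed sizes: 512 points, a 64-channel feature map.
P, C, H, W = 512, 64, 60, 80
points = torch.rand(P, 3) + torch.tensor([0.0, 0.0, 1.0])   # keep z > 0
K = torch.tensor([[50.0, 0.0, 40.0],
                  [0.0, 50.0, 30.0],
                  [0.0, 0.0, 1.0]])
fused = pixel_aligned_features(points, torch.rand(C, H, W), K)

# "Transformer refinement stages" stand-in: a small encoder stack.
layer = torch.nn.TransformerEncoderLayer(d_model=C + 3, nhead=1, batch_first=True)
refined = torch.nn.TransformerEncoder(layer, num_layers=2)(fused[None])  # (1, P, C + 3)
```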
- IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios.
The unstructured and complex driving layouts found in several developing countries, such as India, pose a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- H3D: Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo [4.263987603222371]
This paper introduces a 3D dataset which is unique in three ways.
It depicts the village of Hessigheim (Germany), henceforth referred to as H3D.
It is designed both to promote research in the field of 3D data analysis and to evaluate and rank emerging approaches.
arXiv Detail & Related papers (2021-02-10T09:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.