SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous
Driving
- URL: http://arxiv.org/abs/2302.12966v1
- Date: Sat, 25 Feb 2023 02:59:12 GMT
- Title: SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous
Driving
- Authors: Jiawei Hou, Qi Chen, Yurong Cheng, Guang Chen, Xiangyang Xue, Taiping
Zeng, Jian Pu
- Abstract summary: SUPS is a simulated dataset for underground automatic parking.
It supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images.
We also evaluate the state-of-the-art SLAM algorithms and perception models on our dataset.
- Score: 41.221988979184665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic underground parking has attracted considerable attention as the
scope of autonomous driving expands. The autonomous vehicle must perceive its
environment, track its location, and build a reliable map of the scenario.
Mainstream solutions combine well-trained neural networks with simultaneous
localization and mapping (SLAM) methods, which require large numbers of
carefully labeled images and estimates from multiple sensors. However, there is a
lack of underground parking scenario datasets with multiple sensors and
well-labeled images that support both SLAM tasks and perception tasks, such as
semantic segmentation and parking slot detection. In this paper, we present
SUPS, a simulated dataset for underground automatic parking that supports
multiple tasks with multiple sensors and multiple semantic labels aligned with
successive images according to timestamps. We aim to address the shortcomings of
existing datasets through the variability of environments and the diversity and
accessibility of sensors in the virtual scene. Specifically, the dataset
records frames from four surrounding fisheye cameras, two forward pinhole
cameras, and a depth camera, together with data from a LiDAR, an inertial
measurement unit (IMU), and GNSS. Pixel-level semantic labels are provided for
objects, especially ground signs such as arrows, parking lines, lanes, and speed
bumps. Our dataset supports perception, 3D reconstruction, depth estimation,
SLAM, and other related tasks. We also evaluate state-of-the-art SLAM algorithms
and perception models on our dataset. Finally, we open-source our virtual 3D
scene built on the Unity Engine and release our dataset at
https://github.com/jarvishou829/SUPS.
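A common way to consume such multi-sensor data is to pair frames from different streams by nearest timestamp, mirroring how the semantic labels are aligned with successive images. The sketch below illustrates this idea in Python; the index-file layout, function names, and the 50 ms tolerance are illustrative assumptions, not the actual SUPS loading interface (see the repository above for the real data format).

```python
# Minimal sketch of timestamp-based alignment between two sensor streams.
# NOTE: the index-file format and directory layout below are hypothetical
# illustrations only; consult the SUPS repository for the actual data format.
from bisect import bisect_left
from pathlib import Path


def load_timestamps(index_csv: Path) -> list[tuple[float, str]]:
    """Read (timestamp, relative_path) pairs from a hypothetical per-sensor index file."""
    records = []
    for line in index_csv.read_text().splitlines():
        ts, rel_path = line.split(",", 1)
        records.append((float(ts), rel_path.strip()))
    records.sort(key=lambda r: r[0])
    return records


def nearest(records: list[tuple[float, str]], ts: float) -> tuple[float, str]:
    """Return the record whose timestamp is closest to ts (binary search)."""
    keys = [r[0] for r in records]
    i = bisect_left(keys, ts)
    candidates = records[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda r: abs(r[0] - ts))


def align(camera_index: Path, label_index: Path, max_dt: float = 0.05):
    """Pair each camera frame with the closest semantic-label frame within max_dt seconds."""
    cams = load_timestamps(camera_index)
    labels = load_timestamps(label_index)
    pairs = []
    for ts, img in cams:
        label_ts, label_path = nearest(labels, ts)
        if abs(label_ts - ts) <= max_dt:
            pairs.append((img, label_path, label_ts - ts))
    return pairs
```

Keeping only pairs whose timestamps differ by at most max_dt is the usual guard against pairing frames from streams that were captured at different rates.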
Related papers
- SemanticSpray++: A Multimodal Dataset for Autonomous Driving in Wet Surface Conditions [10.306226508237348]
The SemanticSpray++ dataset provides labels for camera, LiDAR, and radar data of highway-like scenarios in wet surface conditions.
By labeling all three sensor modalities, the dataset offers a comprehensive test bed for analyzing the performance of different perception methods.
arXiv Detail & Related papers (2024-06-14T11:46:48Z)
- Neural Rendering based Urban Scene Reconstruction for Autonomous Driving [8.007494499012624]
We propose a multimodal 3D scene reconstruction framework combining neural implicit surfaces and radiance fields.
Dense 3D reconstruction has many applications in automated driving including automated annotation validation.
We demonstrate qualitative and quantitative results on challenging automotive scenes.
arXiv Detail & Related papers (2024-02-09T23:20:23Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns memory-efficient dense 3D geometry and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparing and training deployable deep learning architectures requires models suited to different traffic scenarios.
The unstructured and complex driving layouts found in several developing countries, such as India, pose a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z)
- Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities [4.4855664250147465]
We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views.
The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes.
arXiv Detail & Related papers (2022-08-30T11:36:07Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving [19.66714697653504]
Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving.
However, the lack of suitable datasets has severely hindered the development of collaborative perception algorithms.
We release DOLPHINS: dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving.
arXiv Detail & Related papers (2022-07-15T17:07:07Z)
- Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data [80.14669385741202]
We propose a self-supervised pre-training method for 3D perception models tailored to autonomous driving data.
We leverage the availability of synchronized and calibrated image and Lidar sensors in autonomous driving setups.
Our method does not require any point cloud or image annotations.
arXiv Detail & Related papers (2022-03-30T12:40:30Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)