IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes
- URL: http://arxiv.org/abs/2210.12878v1
- Date: Sun, 23 Oct 2022 23:03:17 GMT
- Title: IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes
- Authors: Shubham Dokania, A.H. Abdul Hafez, Anbumani Subramanian, Manmohan
Chandraker, C.V. Jawahar
- Abstract summary: Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios.
The unstructured and complex driving layouts found in several developing countries such as India pose a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
- Score: 79.18349050238413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving and assistance systems rely on annotated data from traffic
and road scenarios to model and learn the various object relations in complex
real-world scenarios. Preparation and training of deployable deep learning
architectures require the models to be suited to different traffic scenarios
and adapt to different situations. Currently, existing datasets, while
large-scale, lack such diversity and are geographically biased mainly towards
developed cities. The unstructured and complex driving layouts found in several
developing countries such as India pose a challenge to these models due to the
sheer degree of variation in object types, densities, and locations. To
facilitate better research toward accommodating such scenarios, we build a new
dataset, IDD-3D, which consists of multi-modal data from multiple cameras and
LiDAR sensors with 12k annotated driving LiDAR frames across various traffic
scenarios. We discuss the need for this dataset through statistical comparisons
with existing datasets and highlight benchmarks on standard 3D object detection
and tracking tasks in complex layouts. Code and data available at
https://github.com/shubham1810/idd3d_kit.git
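The abstract mentions 12k annotated LiDAR frames released via the idd3d_kit repository. As a minimal sketch of how such frames are typically consumed, the snippet below loads a point cloud assuming a KITTI-style packed binary format (float32 x, y, z, intensity); the actual IDD-3D file layout and field order are assumptions, so check the repository before relying on them.

```python
# Hedged sketch: reading a LiDAR frame assumed to be stored as packed
# float32 values in (x, y, z, intensity) order, KITTI-style. The real
# IDD-3D format may differ; verify against idd3d_kit.
import numpy as np

def load_lidar_frame(path: str) -> np.ndarray:
    """Read a packed float32 point cloud into an (N, 4) array."""
    points = np.fromfile(path, dtype=np.float32)
    return points.reshape(-1, 4)  # columns assumed: x, y, z, intensity

def filter_by_range(points: np.ndarray, max_range: float = 50.0) -> np.ndarray:
    """Keep points within max_range metres of the sensor origin."""
    dist = np.linalg.norm(points[:, :3], axis=1)
    return points[dist <= max_range]
```

Range filtering of this kind is a common preprocessing step before 3D detection, since distant LiDAR returns are sparse and noisy.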
Related papers
- MDT3D: Multi-Dataset Training for LiDAR 3D Object Detection Generalization [3.8243923744440926]
3D object detection models trained on a source dataset with a specific point distribution have shown difficulties in generalizing to unseen datasets.
We leverage the information available from several annotated source datasets with our Multi-Dataset Training for 3D Object Detection (MDT3D) method.
We show how we managed the mix of datasets during training and finally introduce a new cross-dataset augmentation method: cross-dataset object injection.
arXiv Detail & Related papers (2023-08-02T08:20:00Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
The LiDAR-CS dataset is the first dataset to address the sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving [19.66714697653504]
Vehicle-to-Everything (V2X) network has enabled collaborative perception in autonomous driving.
The lack of suitable datasets has severely hindered the development of collaborative perception algorithms.
We release DOLPHINS: dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving.
arXiv Detail & Related papers (2022-07-15T17:07:07Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- RELLIS-3D Dataset: Data, Benchmarks and Analysis [16.803548871633957]
RELLIS-3D is a multimodal dataset collected in an off-road environment.
The data was collected on the Rellis Campus of Texas A&M University.
arXiv Detail & Related papers (2020-11-17T18:28:01Z)
- Train in Germany, Test in The USA: Making 3D Object Detectors Generalize [59.455225176042404]
Deep learning has substantially improved the 3D object detection accuracy for LiDAR and stereo camera data alike.
Most datasets for autonomous driving are collected within a narrow subset of cities within one country.
In this paper we consider the task of adapting 3D object detectors from one dataset to another.
arXiv Detail & Related papers (2020-05-17T00:56:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.