The NEOLIX Open Dataset for Autonomous Driving
- URL: http://arxiv.org/abs/2011.13528v2
- Date: Thu, 28 Jan 2021 06:41:15 GMT
- Title: The NEOLIX Open Dataset for Autonomous Driving
- Authors: Lichao Wang, Lanxin Lei, Hongli Song, Weibao Wang
- Abstract summary: We present the NEOLIX dataset and its applications in the autonomous driving area.
Our dataset includes about 30,000 frames with point cloud labels, and more than 600k 3D bounding boxes with annotations.
- Score: 1.4091801425319965
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the gradual maturity of 5G technology, autonomous driving technology has
attracted more and more attention in the research community. Autonomous
driving vehicles rely on the cooperation of artificial intelligence, visual
computing, radar, monitoring equipment, and GPS, which enables computers to
operate motor vehicles automatically and safely without human
interference. However, the lack of large-scale datasets for training and system
evaluation remains a major obstacle in the development of robust perception models. In this
paper, we present the NEOLIX dataset and its applications in the autonomous
driving area. Our dataset includes about 30,000 frames with point cloud labels,
and more than 600k 3D bounding boxes with annotations. The data collection
covers multiple regions and various driving conditions, including day, night,
dawn, dusk, and sunny weather. In order to label this complete dataset, we developed
various tools and algorithms, specified for each task, to speed up the labelling
process. We expect that our dataset and related algorithms can support
and motivate researchers in the further development of autonomous driving in the
field of computer vision.
Related papers
- Data-Centric Evolution in Autonomous Driving: A Comprehensive Survey of
Big Data System, Data Mining, and Closed-Loop Technologies [16.283613452235976]
Key to surmount the bottleneck lies in data-centric autonomous driving technology.
There is a lack of systematic knowledge and deep understanding regarding how to build efficient data-centric AD technology.
This article will closely focus on reviewing the state-of-the-art data-driven autonomous driving technologies.
arXiv Detail & Related papers (2024-01-23T16:28:30Z) - RainSD: Rain Style Diversification Module for Image Synthesis
Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage generated from real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated and analyzed.
The tendency of performance degradation in deep neural network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z) - Open-sourced Data Ecosystem in Autonomous Driving: the Present and Future [130.87142103774752]
This review systematically assesses over seventy open-source autonomous driving datasets.
It offers insights into various aspects, such as the principles underlying the creation of high-quality datasets.
It also delves into the scientific and technical challenges that warrant resolution.
arXiv Detail & Related papers (2023-12-06T10:46:53Z) - HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for
Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology, which can enable the autonomous vehicle to perceive and understand the subtle and complex behaviors of pedestrians.
Our method efficiently makes use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin.
Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages.
arXiv Detail & Related papers (2022-12-15T11:15:14Z) - aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving
with Long-Range Perception [0.0]
This dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain.
We trained unimodal and multimodal baseline models for 3D object detection.
arXiv Detail & Related papers (2022-11-17T10:19:59Z) - DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and
Interconnected Self-driving [19.66714697653504]
Vehicle-to-Everything (V2X) network has enabled collaborative perception in autonomous driving.
The lack of datasets has severely blocked the development of collaborative perception algorithms.
We release DOLPHINS: dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving.
arXiv Detail & Related papers (2022-07-15T17:07:07Z) - CODA: A Real-World Road Corner Case Dataset for Object Detection in
Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z) - KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding
in 2D and 3D [67.50776195828242]
KITTI-360 is a suburban driving dataset which comprises richer input modalities, comprehensive semantic instance annotations and accurate localization.
For efficient annotation, we created a tool to label 3D scenes with bounding primitives, resulting in over 150k semantic and instance annotated images and 1B annotated 3D points.
We established benchmarks and baselines for several tasks relevant to mobile perception, encompassing problems from computer vision, graphics, and robotics on the same dataset.
arXiv Detail & Related papers (2021-09-28T00:41:29Z) - SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous
Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds, within 32 different cities, under different weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z) - One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.