Occlusion-Aware 2D and 3D Centerline Detection for Urban Driving via Automatic Label Generation
- URL: http://arxiv.org/abs/2311.02044v1
- Date: Fri, 3 Nov 2023 17:20:34 GMT
- Title: Occlusion-Aware 2D and 3D Centerline Detection for Urban Driving via Automatic Label Generation
- Authors: David Paz, Narayanan E. Ranganatha, Srinidhi K. Srinivas, Yunchao Yao,
Henrik I. Christensen
- Abstract summary: This research work seeks to explore and identify strategies that can determine road topology information in 2D and 3D under highly dynamic urban driving scenarios.
To facilitate this exploration, we introduce a substantial dataset comprising nearly one million automatically labeled data frames.
- Score: 4.921246328739616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This research work seeks to explore and identify strategies that can
determine road topology information in 2D and 3D under highly dynamic urban
driving scenarios. To facilitate this exploration, we introduce a substantial
dataset comprising nearly one million automatically labeled data frames. A key
contribution of our research lies in developing an automatic label-generation
process and an occlusion handling strategy. This strategy is designed to model
a wide range of occlusion scenarios, from mild disruptions to severe blockages.
Furthermore, we present a comprehensive ablation study wherein multiple
centerline detection methods are developed and evaluated. This analysis not
only benchmarks the performance of various approaches but also provides
valuable insights into the interpretability of these methods. Finally, we
demonstrate the practicality of our methods and assess their adaptability
across different sensor configurations, highlighting their versatility and
relevance in real-world scenarios. Our dataset and experimental models are
publicly available.
Related papers
- Distribution Discrepancy and Feature Heterogeneity for Active 3D Object Detection [18.285299184361598]
LiDAR-based 3D object detection is a critical technology for the development of autonomous driving and robotics.
We propose a novel and effective active learning (AL) method called Distribution Discrepancy and Feature Heterogeneity (DDFH).
It simultaneously considers geometric features and model embeddings, assessing information from both instance-level and frame-level perspectives.
arXiv Detail & Related papers (2024-09-09T08:26:11Z)
- Semi-supervised 3D Semantic Scene Completion with 2D Vision Foundation Model Guidance [11.090775523892074]
We introduce a novel semi-supervised framework to alleviate the dependency on densely annotated data.
Our approach leverages 2D foundation models to generate essential 3D scene geometric and semantic cues.
Our method achieves up to 85% of the fully-supervised performance using only 10% labeled data.
arXiv Detail & Related papers (2024-08-21T12:13:18Z)
- Collective Perception Datasets for Autonomous Driving: A Comprehensive Review [0.5326090003728084]
This paper provides the first comprehensive review of collective perception datasets in the context of autonomous driving.
The study aims to identify the key criteria of all datasets and to present their strengths, weaknesses, and anomalies.
arXiv Detail & Related papers (2024-05-27T09:08:55Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Dual-Perspective Knowledge Enrichment for Semi-Supervised 3D Object Detection [55.210991151015534]
We present a novel Dual-Perspective Knowledge Enrichment approach named DPKE for semi-supervised 3D object detection.
Our DPKE enriches the knowledge of limited training data, particularly unlabeled data, from two perspectives: data-perspective and feature-perspective.
arXiv Detail & Related papers (2024-01-10T08:56:07Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Anomaly Detection via Nonlinear Manifold Learning [0.0]
Anomalies are samples that significantly deviate from the rest of the data and their detection plays a major role in building machine learning models.
We introduce a robust, efficient, and interpretable methodology based on nonlinear manifold learning to detect anomalies in unsupervised settings.
arXiv Detail & Related papers (2023-06-15T18:48:10Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Survey and Systematization of 3D Object Detection Models and Methods [3.472931603805115]
We provide a comprehensive survey of recent developments from 2012-2021 in 3D object detection.
We introduce fundamental concepts and focus on the broad range of different approaches that have emerged over the past decade.
We propose a systematization that provides a practical framework for comparing these approaches with the goal of guiding future development, evaluation and application activities.
arXiv Detail & Related papers (2022-01-23T20:06:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.