Exploring Diversity-based Active Learning for 3D Object Detection in
Autonomous Driving
- URL: http://arxiv.org/abs/2205.07708v1
- Date: Mon, 16 May 2022 14:21:30 GMT
- Title: Exploring Diversity-based Active Learning for 3D Object Detection in
Autonomous Driving
- Authors: Zhihao Liang, Xun Xu, Shengheng Deng, Lile Cai, Tao Jiang, Kui Jia
- Abstract summary: In this work, we investigate diversity-based active learning (AL) as a potential solution to alleviate the annotation burden.
We propose a novel acquisition function that enforces spatial and temporal diversity in the selected samples.
We demonstrate the effectiveness of the proposed method on the nuScenes dataset and show that it outperforms existing AL strategies significantly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D object detection has recently received much attention due to its great
potential in autonomous vehicles (AVs). The success of deep-learning-based object
detectors relies on the availability of large-scale annotated datasets, which
are time-consuming and expensive to compile, especially for 3D bounding box
annotation. In this work, we investigate diversity-based active learning (AL)
as a potential solution to alleviate the annotation burden. Given a limited
annotation budget, only the most informative frames and objects are
automatically selected for humans to annotate. Technically, we take
advantage of the multimodal information provided in an AV dataset, and propose
a novel acquisition function that enforces spatial and temporal diversity in
the selected samples. We benchmark the proposed method against other AL
strategies under realistic annotation cost measurement, where the realistic
costs for annotating a frame and a 3D bounding box are both taken into
consideration. We demonstrate the effectiveness of the proposed method on the
nuScenes dataset and show that it outperforms existing AL strategies
significantly.
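The diversity-enforcing selection described in the abstract can be illustrated with a greedy farthest-point sketch: frames far from already-selected frames in both feature space (spatial diversity) and time (temporal diversity) are picked first. This is a minimal, hypothetical sketch, not the paper's actual acquisition function; the function name `select_diverse_frames`, the mixing weight `alpha`, and the use of precomputed frame feature vectors are illustrative assumptions.

```python
import numpy as np

def select_diverse_frames(features, timestamps, budget, alpha=0.5):
    """Greedily pick `budget` frames, maximizing the distance of each new
    frame to the nearest already-selected frame.

    features   : (n, d) array of per-frame feature vectors (assumed given)
    timestamps : (n,) array of frame times
    alpha      : weight between spatial (feature) and temporal diversity
    """
    selected = [0]  # seed with the first frame
    # Distance from every frame to its nearest selected frame, per modality.
    feat_d = np.linalg.norm(features - features[0], axis=1)
    time_d = np.abs(timestamps - timestamps[0])
    while len(selected) < budget:
        # Combined diversity score; the next pick is the frame farthest
        # from everything chosen so far.
        score = alpha * feat_d + (1 - alpha) * time_d
        nxt = int(np.argmax(score))
        selected.append(nxt)
        # Update nearest-selected distances with the new pick.
        feat_d = np.minimum(feat_d, np.linalg.norm(features - features[nxt], axis=1))
        time_d = np.minimum(time_d, np.abs(timestamps - timestamps[nxt]))
    return selected
```

In practice the two distance terms would need to be normalized to comparable scales before mixing; `alpha=0.5` here is an arbitrary placeholder.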
Related papers
- Find n' Propagate: Open-Vocabulary 3D Object Detection in Urban Environments [67.83787474506073]
We tackle the limitations of current LiDAR-based 3D object detection systems.
We introduce a universal Find n' Propagate approach for 3D open-vocabulary (OV) tasks.
We achieve up to a 3.97-fold increase in Average Precision (AP) for novel object classes.
arXiv Detail & Related papers (2024-03-20T12:51:30Z) - The Why, When, and How to Use Active Learning in Large-Data-Driven 3D
Object Detection for Safe Autonomous Driving: An Empirical Exploration [1.2815904071470705]
Our findings suggest that entropy querying is a promising strategy for selecting data that enhances model learning in resource-constrained environments.
arXiv Detail & Related papers (2024-01-30T00:14:13Z) - Towards Open World Active Learning for 3D Object Detection [43.242426340854905]
We introduce Open World Active Learning for 3D Object Detection (OWAL-3D)
OWAL-3D aims at selecting a small number of 3D boxes to annotate while maximizing detection performance on both known and unknown classes.
We unify both relational constraints into a simple and effective AL strategy, namely OpenCRB, which guides the acquisition of informative point clouds.
arXiv Detail & Related papers (2023-10-16T13:32:53Z) - ReBound: An Open-Source 3D Bounding Box Annotation Tool for Active
Learning [3.1997195262707536]
ReBound is an open-source 3D visualization and dataset re-annotation tool.
We show that ReBound is effective for exploratory data analysis and can facilitate active learning.
arXiv Detail & Related papers (2023-03-11T00:11:30Z) - Exploring Active 3D Object Detection from a Generalization Perspective [58.597942380989245]
Uncertainty-based active learning policies fail to balance the trade-off between point cloud informativeness and box-level annotation costs.
We propose CRB, which hierarchically filters out point clouds with redundant 3D bounding box labels.
Experiments show that the proposed approach outperforms existing active learning strategies.
arXiv Detail & Related papers (2023-01-23T02:43:03Z) - CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection [57.44434974289945]
We propose Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D) framework.
Our framework takes a 3D scene as input and strives to explicitly integrate useful contextual information of the scene.
In addition to 3D object detection, we investigate the effectiveness of our framework for the problem of 3D object counting.
arXiv Detail & Related papers (2022-09-13T05:26:09Z) - Semi-supervised 3D Object Detection via Adaptive Pseudo-Labeling [18.209409027211404]
3D object detection is an important task in computer vision.
Most existing methods require a large number of high-quality 3D annotations, which are expensive to collect.
We propose a novel semi-supervised framework based on pseudo-labeling for outdoor 3D object detection tasks.
arXiv Detail & Related papers (2021-08-15T02:58:43Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.