MS3D: Leveraging Multiple Detectors for Unsupervised Domain Adaptation
in 3D Object Detection
- URL: http://arxiv.org/abs/2304.02431v3
- Date: Tue, 9 May 2023 01:23:24 GMT
- Title: MS3D: Leveraging Multiple Detectors for Unsupervised Domain Adaptation
in 3D Object Detection
- Authors: Darren Tsai, Julie Stephany Berrio, Mao Shan, Eduardo Nebot and
Stewart Worrall
- Abstract summary: Multi-Source 3D (MS3D) is a new self-training pipeline for unsupervised domain adaptation in 3D object detection.
Our proposed Kernel-Density Estimation (KDE) Box Fusion method fuses box proposals from multiple domains to obtain pseudo-labels.
MS3D exhibits greater robustness to domain shift and produces accurate pseudo-labels over greater distances.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Multi-Source 3D (MS3D), a new self-training pipeline for
unsupervised domain adaptation in 3D object detection. Despite the remarkable
accuracy of 3D detectors, they often overfit to specific domain biases, leading
to suboptimal performance in various sensor setups and environments. Existing
methods typically focus on adapting a single detector to the target domain,
overlooking the fact that different detectors possess distinct expertise on
different unseen domains. MS3D leverages this by combining different
pre-trained detectors from multiple source domains and incorporating temporal
information to produce high-quality pseudo-labels for fine-tuning. Our proposed
Kernel-Density Estimation (KDE) Box Fusion method fuses box proposals from
multiple domains to obtain pseudo-labels that surpass the performance of the
best source domain detectors. MS3D exhibits greater robustness to domain shift
and produces accurate pseudo-labels over greater distances, making it
well-suited for high-to-low beam domain adaptation and vice versa. Our method
achieved state-of-the-art performance on all evaluated datasets, and we
demonstrate that the pre-trained detector's source dataset has minimal impact
on the fine-tuned result, making MS3D suitable for real-world applications.
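The abstract names KDE Box Fusion but does not spell the step out. As a rough illustration of the idea, the sketch below fuses box proposals that have already been matched to the same object by taking, for each box parameter independently, the mode of a weighted 1-D kernel density estimate. This is a simplified sketch of the concept, not the authors' implementation: the function name, the per-parameter treatment, the bandwidth, and the grid resolution are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_fuse_boxes(boxes, weights=None, bandwidth=0.1, grid_size=200):
    """Fuse box proposals already matched to one object by taking the
    KDE mode of each box parameter independently (illustrative sketch).

    boxes:   (N, 7) array of [x, y, z, dx, dy, dz, heading] proposals,
             e.g. from several pre-trained detectors.
    weights: optional per-proposal confidences for the weighted KDE.
    Returns a single fused (7,) box.
    """
    boxes = np.asarray(boxes, dtype=np.float64)
    fused = np.empty(boxes.shape[1])
    for i in range(boxes.shape[1]):
        vals = boxes[:, i]
        # gaussian_kde needs non-zero variance; if all proposals agree,
        # the shared value is the mode.
        if np.isclose(vals.std(), 0.0):
            fused[i] = vals[0]
            continue
        kde = gaussian_kde(vals, bw_method=bandwidth, weights=weights)
        grid = np.linspace(vals.min(), vals.max(), grid_size)
        fused[i] = grid[np.argmax(kde(grid))]  # density peak as fused value
    return fused

# Toy usage: three detectors propose slightly different boxes for one car.
# (Heading wraparound is ignored here for simplicity.)
proposals = np.array([
    [10.1, 4.9, -1.0, 4.6, 1.9, 1.6, 0.02],
    [10.0, 5.0, -1.0, 4.5, 1.8, 1.5, 0.00],
    [10.3, 5.1, -0.9, 4.7, 1.9, 1.6, 0.05],
])
print(kde_fuse_boxes(proposals, weights=np.array([0.9, 0.8, 0.6])))
```

The paper's full pipeline additionally clusters proposals across detectors and temporally adjacent frames before fusing; the sketch shows only the mode-seeking core.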
Related papers
- DiffuBox: Refining 3D Object Detection with Point Diffusion
We introduce a novel diffusion-based box refinement approach to ensure robust 3D object detection and localization.
We evaluate this approach under various domain adaptation settings, and our results reveal significant improvements across different datasets.
arXiv Detail & Related papers (2024-05-25T03:14:55Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- MS3D++: Ensemble of Experts for Multi-Source Unsupervised Domain Adaptation in 3D Object Detection
Deploying 3D detectors in unfamiliar domains has been demonstrated to result in a significant 70-90% drop in detection rate.
We introduce MS3D++, a self-training framework for multi-source unsupervised domain adaptation in 3D object detection.
MS3D++ generates high-quality pseudo-labels, allowing 3D detectors to achieve high performance on a range of lidar types.
arXiv Detail & Related papers (2023-08-11T07:56:10Z)
- Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection
3D object detection from point clouds is crucial in safety-critical autonomous driving.
We propose a density-insensitive domain adaption framework to address the density-induced domain gap.
Experimental results on three widely adopted 3D object detection datasets demonstrate that our proposed domain adaption method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-04-19T06:33:07Z)
- SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from Point Cloud
We present a novel Semi-Supervised Domain Adaptation method for 3D object detection (SSDA3D).
SSDA3D includes an Inter-domain Adaptation stage and an Intra-domain Generalization stage.
Experiments show that, with only 10% labeled target data, our SSDA3D can surpass the fully-supervised oracle model trained with 100% target labels.
arXiv Detail & Related papers (2022-12-06T09:32:44Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
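MS3D, ST3D, and STMono3D above all build on the same self-training skeleton: run a pre-trained detector on the unlabeled target domain, keep its confident detections as pseudo-labels, fine-tune on them, and repeat. A minimal sketch of that loop follows; the StubDetector and its methods are hypothetical stand-ins, not an API from any of these papers.

```python
import random

class StubDetector:
    """Hypothetical stand-in for a pre-trained 3D detector, included only
    so the loop below runs; a real model would go here."""
    def predict(self, frame):
        # Return (box, score) pairs; random placeholders for illustration.
        return [((frame, i), random.random()) for i in range(3)]

    def fine_tune_step(self, frame, labels):
        pass  # a real implementation would take a gradient step here

def self_train(detector, target_frames, rounds=3, conf_thresh=0.6):
    """Generic self-training for domain adaptation: pseudo-label the
    unlabeled target data, then fine-tune on the confident labels."""
    for _ in range(rounds):
        # 1) Generate pseudo-labels on the unlabeled target domain.
        pseudo = [(f, [b for b, s in detector.predict(f) if s >= conf_thresh])
                  for f in target_frames]
        # 2) Fine-tune on the detector's own confident predictions.
        for frame, labels in pseudo:
            detector.fine_tune_step(frame, labels)
    return detector

self_train(StubDetector(), target_frames=list(range(10)))
```

The methods differ mainly in step 1: STMono3D generates the pseudo-labels with a teacher-student paradigm, while MS3D replaces the single detector with an ensemble of detectors from multiple source domains whose proposals are combined via KDE Box Fusion.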