Improving Generalization Ability for 3D Object Detection by Learning Sparsity-invariant Features
- URL: http://arxiv.org/abs/2502.02322v1
- Date: Tue, 04 Feb 2025 13:47:02 GMT
- Title: Improving Generalization Ability for 3D Object Detection by Learning Sparsity-invariant Features
- Authors: Hsin-Cheng Lu, Chung-Yi Lin, Winston H. Hsu
- Abstract summary: We propose a method to improve the generalization ability for 3D object detection on a single domain.
To learn sparsity-invariant features from a single source domain, we selectively subsample the source data to a specific beam.
We also employ the teacher-student framework to align the Bird's Eye View features for different point cloud densities.
- Score: 21.761631081209263
- License:
- Abstract: In autonomous driving, 3D object detection is essential for accurately identifying and tracking objects. Despite the continuous development of various technologies for this task, a significant drawback is observed in most of them: they experience substantial performance degradation when detecting objects in unseen domains. In this paper, we propose a method to improve the generalization ability for 3D object detection on a single domain. We primarily focus on generalizing from a single source domain to target domains with distinct sensor configurations and scene distributions. To learn sparsity-invariant features from a single source domain, we selectively subsample the source data to a specific beam, using confidence scores determined by the current detector to identify the density that holds utmost importance for the detector. Subsequently, we employ the teacher-student framework to align the Bird's Eye View (BEV) features for different point cloud densities. We also utilize feature content alignment (FCA) and graph-based embedding relationship alignment (GERA) to instruct the detector to be domain-agnostic. Extensive experiments demonstrate that our method exhibits superior generalization capabilities compared to other baselines. Furthermore, our approach even outperforms certain domain adaptation methods that can access the target domain data.
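The two core ideas in the abstract, subsampling a dense source point cloud to fewer beams and aligning the resulting BEV features against the dense version, can be pictured with a minimal sketch. The snippet below is an illustration only, not the paper's implementation: it assumes each point carries a beam (ring) index, uses a fixed hypothetical `keep_every` factor instead of the paper's confidence-guided density selection, and uses a plain MSE loss as a stand-in for the paper's FCA/GERA alignment objectives.

```python
import numpy as np
import torch
import torch.nn.functional as F

def subsample_beams(points: np.ndarray, beam_idx: np.ndarray,
                    keep_every: int = 2) -> np.ndarray:
    """Keep every `keep_every`-th LiDAR beam to mimic a sparser sensor.

    points:   (N, 4) array of x, y, z, intensity.
    beam_idx: (N,) integer ring index per point (0 .. num_beams - 1).
    """
    mask = (beam_idx % keep_every) == 0
    return points[mask]

def bev_alignment_loss(student_bev: torch.Tensor,
                       teacher_bev: torch.Tensor) -> torch.Tensor:
    """Align the student's BEV features (from the subsampled cloud) with the
    teacher's BEV features (from the dense cloud). MSE is used here purely
    as a placeholder for the paper's alignment losses."""
    return F.mse_loss(student_bev, teacher_bev.detach())
```

In the paper, the target density is chosen using detector confidence scores rather than a fixed factor, and dedicated alignment modules replace the MSE placeholder; the sketch only conveys the overall data flow of the teacher-student setup.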
Related papers
- Object Style Diffusion for Generalized Object Detection in Urban Scene [69.04189353993907]
We introduce a novel single-domain object detection generalization method, named GoDiff.
By integrating pseudo-target domain data with source domain data, we diversify the training dataset.
Experimental results demonstrate that our method not only enhances the generalization ability of existing detectors but also functions as a plug-and-play enhancement for other single-domain generalization methods.
arXiv Detail & Related papers (2024-12-18T13:03:00Z)
- Domain Generalization of 3D Object Detection by Density-Resampling [14.510085711178217]
Point-cloud-based 3D object detection suffers from performance degradation when encountering data with novel domain gaps.
We propose an SDG method to improve the generalizability of 3D object detection to unseen target domains.
Our work introduces a novel data augmentation method and contributes a new multi-task learning strategy in the methodology.
arXiv Detail & Related papers (2023-11-17T20:01:29Z)
- Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection [19.703181080679176]
3D object detection from point clouds is crucial in safety-critical autonomous driving.
We propose a density-insensitive domain adaption framework to address the density-induced domain gap.
Experimental results on three widely adopted 3D object detection datasets demonstrate that our proposed domain adaption method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-04-19T06:33:07Z)
- MS3D: Leveraging Multiple Detectors for Unsupervised Domain Adaptation in 3D Object Detection [7.489722641968593]
Multi-Source 3D (MS3D) is a new self-training pipeline for unsupervised domain adaptation in 3D object detection.
Our proposed Kernel-Density Estimation (KDE) Box Fusion method fuses box proposals from multiple domains to obtain pseudo-labels.
MS3D exhibits greater robustness to domain shift and produces accurate pseudo-labels over greater distances.
arXiv Detail & Related papers (2023-04-05T13:29:21Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection [107.52026281057343]
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
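Several of the entries above (MS3D, STMono3D, ST3D) are built around a self-training loop: the current detector labels unlabeled target-domain scans, confident predictions are kept as pseudo-labels, and the detector is fine-tuned on them. The sketch below is a generic, simplified version of that loop under assumed interfaces (a detector object exposing `predict` and `train_on`, and a fixed confidence threshold); it is not any of the papers' actual pipelines, which add further components such as MS3D's KDE box fusion.

```python
from typing import Any, List, Sequence, Tuple

def self_training_round(detector: Any,
                        target_scans: Sequence[Any],
                        conf_threshold: float = 0.7) -> Any:
    """One generic self-training round for unsupervised domain adaptation:
    pseudo-label unlabeled target scans with the current detector, keep the
    confident boxes, then fine-tune the detector on those pseudo-labels."""
    pseudo_labeled: List[Tuple[Any, List[Any]]] = []
    for scan in target_scans:
        # Hypothetical API: returns a list of (box, confidence) pairs.
        predictions = detector.predict(scan)
        kept = [box for box, score in predictions if score >= conf_threshold]
        if kept:
            pseudo_labeled.append((scan, kept))
    # Hypothetical API: fine-tunes the detector on (scan, boxes) pairs.
    detector.train_on(pseudo_labeled)
    return detector
```

Repeating such rounds with progressively refined pseudo-labels is the common thread across these self-training approaches.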