WLST: Weak Labels Guided Self-training for Weakly-supervised Domain
Adaptation on 3D Object Detection
- URL: http://arxiv.org/abs/2310.03821v2
- Date: Thu, 8 Feb 2024 03:36:46 GMT
- Title: WLST: Weak Labels Guided Self-training for Weakly-supervised Domain
Adaptation on 3D Object Detection
- Authors: Tsung-Lin Tsou, Tsung-Han Wu, and Winston H. Hsu
- Abstract summary: Weakly-supervised domain adaptation (WDA) is an underexplored yet practical task that requires only a small labeling effort on the target domain.
We propose a general weak labels guided self-training framework, WLST, designed for WDA on 3D object detection.
Our method is able to generate more robust and consistent pseudo labels that would benefit the training process on the target domain.
- Score: 22.835487211419483
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the field of domain adaptation (DA) on 3D object detection, most of the
work is dedicated to unsupervised domain adaptation (UDA). Yet, without any
target annotations, the performance gap between the UDA approaches and the
fully-supervised approach remains noticeable, which makes UDA impractical for
real-world applications. On the other hand, weakly-supervised domain adaptation
(WDA) is an underexplored yet practical task that requires only a small labeling
effort on the target domain. To improve DA performance in a cost-effective
way, we propose a general weak labels guided self-training framework, WLST,
designed for WDA on 3D object detection. By incorporating an autolabeler, which
generates 3D pseudo labels from 2D bounding boxes, into the existing
self-training pipeline, our method produces more robust and consistent pseudo
labels that benefit the training process on the target
domain. Extensive experiments demonstrate the effectiveness, robustness, and
detector-agnosticism of our WLST framework. Notably, it outperforms previous
state-of-the-art methods on all evaluation tasks.
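The abstract describes the core idea at a high level: an autolabeler lifts weak 2D box annotations on the target domain to 3D pseudo labels, which are then combined with the detector's own predictions inside a standard self-training loop. The sketch below is a minimal, hypothetical Python illustration of that loop, not the paper's implementation; the function names (`autolabel_from_2d`, `detector_predict`), the BEV-IoU placeholder, and the score/IoU thresholds and fusion rule are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box3D:
    """A 3D box with a confidence score (fields simplified for illustration)."""
    x: float
    y: float
    z: float
    l: float
    w: float
    h: float
    yaw: float
    score: float

def autolabel_from_2d(weak_2d_boxes: List[tuple]) -> List[Box3D]:
    """Hypothetical autolabeler: lifts weak 2D boxes into 3D pseudo labels.
    In practice this would be a learned module; here it returns dummy boxes."""
    return [Box3D(0.0, 0.0, 0.0, 4.0, 1.8, 1.6, 0.0, score=0.9) for _ in weak_2d_boxes]

def detector_predict(frame) -> List[Box3D]:
    """Hypothetical 3D detector inference on one target-domain frame."""
    return [Box3D(0.2, 0.1, 0.0, 4.1, 1.7, 1.5, 0.05, score=0.6)]

def bev_iou(a: Box3D, b: Box3D) -> float:
    """Placeholder bird's-eye-view overlap; a real pipeline would use rotated-box IoU."""
    return 1.0 if abs(a.x - b.x) < 1.0 and abs(a.y - b.y) < 1.0 else 0.0

def fuse_pseudo_labels(auto_boxes, det_boxes, iou_thr=0.5, score_thr=0.5):
    """Keep autolabeler boxes that agree with a confident detector prediction,
    favoring consistency between the two sources (assumed fusion rule)."""
    fused = []
    for a in auto_boxes:
        matches = [d for d in det_boxes if d.score >= score_thr and bev_iou(a, d) >= iou_thr]
        if matches:
            best = max(matches, key=lambda d: d.score)
            # keep the higher-scoring of the two candidates as the pseudo label
            fused.append(a if a.score >= best.score else best)
    return fused

def self_training_round(target_frames, weak_labels_2d):
    """One round of weak-labels-guided self-training: generate fused pseudo labels,
    then (in a real pipeline) fine-tune the detector on them."""
    pseudo_label_set = []
    for frame, weak_boxes in zip(target_frames, weak_labels_2d):
        auto_boxes = autolabel_from_2d(weak_boxes)
        det_boxes = detector_predict(frame)
        pseudo_label_set.append(fuse_pseudo_labels(auto_boxes, det_boxes))
    # detector.train(target_frames, pseudo_label_set)  # retraining step omitted here
    return pseudo_label_set

if __name__ == "__main__":
    labels = self_training_round(target_frames=[None], weak_labels_2d=[[(10, 20, 50, 80)]])
    print(labels)
```

The fusion step is where the weak labels add value: pseudo labels are kept only when the autolabeler and the detector agree, which is one plausible way to obtain the "more robust and consistent" pseudo labels the abstract refers to.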
Related papers
- STAL3D: Unsupervised Domain Adaptation for 3D Object Detection via Collaborating Self-Training and Adversarial Learning [21.063779140059157]
Existing 3D object detectors suffer from expensive annotation costs and poor transferability to unknown data due to the domain gap.
We propose a novel unsupervised domain adaptation framework for 3D object detection that collaborates self-training (ST) and adversarial learning (AL), dubbed STAL3D, unleashing the complementary advantages of pseudo labels and feature distribution alignment.
arXiv Detail & Related papers (2024-06-27T17:43:35Z)
- Syn-to-Real Unsupervised Domain Adaptation for Indoor 3D Object Detection [50.448520056844885]
We propose a novel framework for syn-to-real unsupervised domain adaptation in indoor 3D object detection.
Our adaptation results from the synthetic 3D-FRONT dataset to the real-world ScanNetV2 and SUN RGB-D datasets demonstrate remarkable mAP25 improvements of 9.7% and 9.1% over Source-Only baselines.
arXiv Detail & Related papers (2024-06-17T08:18:41Z)
- Revisiting Domain-Adaptive 3D Object Detection by Reliable, Diverse and Class-balanced Pseudo-Labeling [38.07637524378327]
Unsupervised domain adaptation (DA) with the aid of pseudo labeling techniques has emerged as a crucial approach for domain-adaptive 3D object detection.
Existing DA methods suffer from a substantial drop in performance when applied to a multi-class training setting.
We propose a novel ReDB framework tailored for learning to detect all classes at once.
arXiv Detail & Related papers (2023-07-16T04:34:11Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation [84.16372642822495]
We propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called UDA-COPE.
Inspired by the recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain labels.
arXiv Detail & Related papers (2021-11-24T16:00:48Z)
- ST3D++: Denoised Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a self-training method, named ST3D++, with a holistic pseudo label denoising pipeline for unsupervised domain adaptation on 3D object detection.
We equip the pseudo label generation process with a hybrid quality-aware triplet memory to improve the quality and stability of generated pseudo labels.
In the model training stage, we propose a source data assisted training strategy and a curriculum data augmentation policy.
arXiv Detail & Related papers (2021-08-15T07:49:06Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)