RefRec: Pseudo-labels Refinement via Shape Reconstruction for
Unsupervised 3D Domain Adaptation
- URL: http://arxiv.org/abs/2110.11036v1
- Date: Thu, 21 Oct 2021 10:24:05 GMT
- Title: RefRec: Pseudo-labels Refinement via Shape Reconstruction for
Unsupervised 3D Domain Adaptation
- Authors: Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama Ramirez, Samuele
Salti, Luigi Di Stefano
- Abstract summary: RefRec is the first approach to investigate pseudo-labels and self-training in UDA for point clouds.
We present two main innovations to make self-training effective on 3D data.
- Score: 23.275638060852067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) for point cloud classification is an
emerging research problem with relevant practical motivations. Reliance on
multi-task learning to align features across domains has been the standard way
to tackle it. In this paper, we take a different path and propose RefRec, the
first approach to investigate pseudo-labels and self-training in UDA for point
clouds. We present two main innovations to make self-training effective on 3D
data: i) refinement of noisy pseudo-labels by matching shape descriptors that
are learned by the unsupervised task of shape reconstruction on both domains;
ii) a novel self-training protocol that learns domain-specific decision
boundaries and reduces the negative impact of mislabelled target samples and
in-domain intra-class variability. RefRec sets the new state of the art in both
standard benchmarks used to test UDA for point cloud classification, showcasing
the effectiveness of self-training for this important problem.
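To make innovation (i) concrete, below is a minimal, illustrative sketch of pseudo-label refinement by shape-descriptor matching; it is not the authors' implementation. It assumes descriptors come from an encoder trained with shape reconstruction on both domains, and that each noisy pseudo-label is replaced by the class of the nearest per-class source prototype under cosine similarity. The function name `refine_pseudo_labels`, the choice of class-mean prototypes, and the similarity measure are all assumptions for illustration.

```python
import numpy as np

def refine_pseudo_labels(tgt_desc, src_desc, src_labels, num_classes):
    """Illustrative refinement of noisy pseudo-labels by descriptor matching.

    tgt_desc   -- (Nt, D) target descriptors from a reconstruction encoder
    src_desc   -- (Ns, D) descriptors of labeled source shapes
    src_labels -- (Ns,)   ground-truth source class ids
    Assumes every class has at least one source sample.
    Returns (Nt,) refined labels: the class of the most similar prototype.
    """
    # Per-class prototypes: the mean descriptor of each source class.
    protos = np.stack([src_desc[src_labels == c].mean(axis=0)
                       for c in range(num_classes)])              # (C, D)
    # Cosine similarity between every target descriptor and every prototype.
    protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8
    tgt = tgt_desc / (np.linalg.norm(tgt_desc, axis=1, keepdims=True) + 1e-8)
    return (tgt @ protos.T).argmax(axis=1)                        # (Nt,)
```

A target sample's noisy classifier prediction could then be replaced, or down-weighted during self-training, whenever the descriptor-matched label disagrees with it; how the two labels are actually combined is a design choice of the paper, not shown here.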
Related papers
- How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation? [23.454153602068786]
We evaluate the utility of Continued Pre-Training (CPT) for generative UDA.
Our findings suggest that the model implicitly learns the downstream task while predicting masked words informative to that task.
arXiv Detail & Related papers (2024-01-31T00:15:34Z)
- Unsupervised Domain Adaptation for Semantic Segmentation with Pseudo Label Self-Refinement [9.69089112870202]
We propose an auxiliary pseudo-label refinement network (PRN) that refines the pseudo-labels online and localizes the pixels whose predicted labels are likely to be noisy.
We evaluate our approach on benchmark datasets with three different domain shifts, and our approach consistently performs significantly better than the previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-25T20:31:07Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Revisiting Domain-Adaptive 3D Object Detection by Reliable, Diverse and Class-balanced Pseudo-Labeling [38.07637524378327]
Unsupervised domain adaptation (DA) with the aid of pseudo labeling techniques has emerged as a crucial approach for domain-adaptive 3D object detection.
Existing DA methods suffer from a substantial drop in performance when applied to a multi-class training setting.
We propose a novel ReDB framework tailored for learning to detect all classes at once.
arXiv Detail & Related papers (2023-07-16T04:34:11Z)
- Self-Distillation for Unsupervised 3D Domain Adaptation [15.660045080090711]
We propose an Unsupervised Domain Adaptation (UDA) method for point cloud classification in 3D vision.
In this work, we focus on obtaining a discriminative feature space for the target domain by enforcing consistency between a point cloud and its augmented version.
We propose a novel iterative self-training methodology that exploits Graph Neural Networks in the UDA context to refine pseudo-labels.
arXiv Detail & Related papers (2022-10-15T08:37:02Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation [84.16372642822495]
We propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called UDA-COPE.
Inspired by the recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain labels.
arXiv Detail & Related papers (2021-11-24T16:00:48Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Cycle Self-Training for Domain Adaptation [85.14659717421533]
Cycle Self-Training (CST) is a principled self-training algorithm that enforces pseudo-labels to generalize across domains.
CST recovers target ground truth, while both invariant feature learning and vanilla self-training fail.
Empirical results indicate that CST significantly improves over the prior state of the art on standard UDA benchmarks; a generic version of the pseudo-labelling loop that these self-training methods share is sketched after this list.
arXiv Detail & Related papers (2021-03-05T10:04:25Z)
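Several of the papers above (the teacher-student paradigm of STMono3D, the temporal-ensemble pseudo-labels in the contrastive-learning entry, and Cycle Self-Training) build on a common self-training skeleton: a teacher produces confidence-filtered pseudo-labels on unlabeled target data, a student is trained on them, and the teacher is updated slowly. The following is a hypothetical, generic sketch of that loop, not the method of any specific paper; the 0.9 threshold, the 0.99 EMA rate, and the function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def _update_teacher(teacher, student, ema=0.99):
    # Slowly track the student's weights (a simple temporal ensemble).
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(ema).add_(ps, alpha=1.0 - ema)

def self_training_step(student, teacher, tgt_batch, optimizer, conf_thresh=0.9):
    """One generic self-training step on an unlabeled target batch."""
    with torch.no_grad():
        probs = F.softmax(teacher(tgt_batch), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > conf_thresh          # keep only confident pseudo-labels

    if keep.any():
        loss = F.cross_entropy(student(tgt_batch[keep]), pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    _update_teacher(teacher, student)
```

The individual papers differ in what replaces each piece: how pseudo-labels are refined before filtering, whether the loop runs online or in discrete rounds, and how the teacher is initialized and updated.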