Self-Distillation for Unsupervised 3D Domain Adaptation
- URL: http://arxiv.org/abs/2210.08226v1
- Date: Sat, 15 Oct 2022 08:37:02 GMT
- Title: Self-Distillation for Unsupervised 3D Domain Adaptation
- Authors: Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama Ramirez, Samuele
Salti, Luigi Di Stefano
- Abstract summary: We propose an Unsupervised Domain Adaptation (UDA) method for point cloud classification in 3D vision.
In this work, we focus on obtaining a discriminative feature space for the target domain by enforcing consistency between a point cloud and its augmented version.
We propose a novel iterative self-training methodology that exploits Graph Neural Networks in the UDA context to refine pseudo-labels.
- Score: 15.660045080090711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point cloud classification is a popular task in 3D vision. However, previous
works usually assume that point clouds at test time are obtained with the same
procedure or sensor as those at training time. Unsupervised Domain Adaptation
(UDA), instead, breaks this assumption and tries to solve the task on an
unlabeled target domain, leveraging only a supervised source domain. For
point cloud classification, recent UDA methods try to align features across
domains via auxiliary tasks such as point cloud reconstruction, which, however,
do not optimize the discriminative power of the target-domain feature space.
In contrast, in this work, we focus on obtaining a discriminative feature space
for the target domain by enforcing consistency between a point cloud and its
augmented version. We then propose a novel iterative self-training methodology
that exploits Graph Neural Networks in the UDA context to refine pseudo-labels.
We perform extensive experiments and set the new state-of-the-art in standard
UDA benchmarks for point cloud classification. Finally, we show how our
approach can be extended to more complex tasks such as part segmentation.
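The sketch below is a rough, hypothetical illustration of the two ideas the abstract describes: enforcing consistency between a point cloud and an augmented view of it, and refining target pseudo-labels by propagating them over a neighbourhood graph in feature space (a crude stand-in for the Graph Neural Network refinement the paper proposes). Every name and hyper-parameter here (`augment`, `consistency_loss`, `refine_pseudo_labels`, the dummy encoder) is an assumption for illustration, not the authors' code.

```python
import math
import random

import torch
import torch.nn.functional as F


def augment(points: torch.Tensor) -> torch.Tensor:
    """Toy augmentation: random rotation about the up axis plus small jitter."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T + 0.01 * torch.randn_like(points)


def consistency_loss(student_logits, teacher_logits, temperature=2.0):
    """Self-distillation-style consistency: predictions on the augmented view
    (student) are pulled towards detached predictions on the original view
    (teacher)."""
    p_teacher = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


def refine_pseudo_labels(features, probs, k=10, alpha=0.5):
    """One label-propagation step over a k-NN graph of target features,
    standing in here for the GNN-based pseudo-label refinement."""
    feats = F.normalize(features, dim=-1)                 # (M, D)
    k = min(k, feats.size(0) - 1)
    sim = (feats @ feats.T).clamp_min(0.0)                # cosine affinities
    topk = sim.topk(k + 1, dim=-1)                        # neighbours incl. self
    adj = torch.zeros_like(sim).scatter_(-1, topk.indices, topk.values)
    adj = adj / adj.sum(dim=-1, keepdim=True)             # row-normalised graph
    refined = alpha * probs + (1.0 - alpha) * adj @ probs
    return refined.argmax(dim=-1)                         # hard pseudo-labels


if __name__ == "__main__":
    # Dummy encoder purely for shape checking; a real point-cloud backbone
    # (e.g. PointNet or DGCNN) would be used here instead.
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1024 * 3, 10))
    clouds = torch.randn(8, 1024, 3)                      # 8 target clouds, 1024 points
    logits = encoder(clouds)
    logits_aug = encoder(augment(clouds))
    loss = consistency_loss(logits_aug, logits)
    pseudo = refine_pseudo_labels(logits, F.softmax(logits, dim=-1))
```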
Related papers
- T-UDA: Temporal Unsupervised Domain Adaptation in Sequential Point
Clouds [2.5291108878852864]
Unsupervised domain adaptation (UDA) methods adapt models trained on one (source) domain, for which annotations are available, to another (target) domain, for which only unannotated data are available.
We introduce a novel domain adaptation method that leverages the best of both trends. Dubbed T-UDA for "temporal UDA", such a combination yields massive performance gains for the task of 3D semantic segmentation of driving scenes.
arXiv Detail & Related papers (2023-09-15T10:47:12Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud
Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from
Point Cloud [125.9472454212909]
We present a novel Semi-Supervised Domain Adaptation method for 3D object detection (SSDA3D).
SSDA3D includes an Inter-domain Adaptation stage and an Intra-domain Generalization stage.
Experiments show that, with only 10% labeled target data, our SSDA3D can surpass the fully-supervised oracle model trained with 100% of the target labels.
arXiv Detail & Related papers (2022-12-06T09:32:44Z)
- Domain Adaptation on Point Clouds via Geometry-Aware Implicits [14.404842571470061]
A popular geometric representation, point clouds have attracted much attention in 3D vision, leading to many applications in autonomous driving and robotics.
One important yet unsolved issue for learning on point clouds is that point clouds of the same object can have significant geometric variations if generated using different procedures or captured using different sensors.
A typical technique to reduce the domain gap is to perform adversarial training so that point cloud features can be aligned across domains.
Here we propose a simple yet effective method for unsupervised domain adaptation on point clouds by employing a self-supervised task of learning geometry-aware implicits.
arXiv Detail & Related papers (2021-12-17T06:28:01Z)
- RefRec: Pseudo-labels Refinement via Shape Reconstruction for
Unsupervised 3D Domain Adaptation [23.275638060852067]
RefRec is the first approach to investigate pseudo-labels and self-training in UDA for point clouds.
We present two main innovations to make self-training effective on 3D data.
arXiv Detail & Related papers (2021-10-21T10:24:05Z)
- Plugging Self-Supervised Monocular Depth into Unsupervised Domain
Adaptation for Semantic Segmentation [19.859764556851434]
We propose to exploit self-supervised monocular depth estimation to improve UDA for semantic segmentation.
Our whole proposal achieves state-of-the-art performance (58.8 mIoU) on the GTA5->CS benchmark.
arXiv Detail & Related papers (2021-10-13T12:48:51Z)
- Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for
Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z)
- Seeking Similarities over Differences: Similarity-based Domain Alignment
for Adaptive Object Detection [86.98573522894961]
We propose a framework that generalizes the components commonly used by Unsupervised Domain Adaptation (UDA) algorithms for detection.
Specifically, we propose a novel UDA algorithm, ViSGA, that leverages the best design choices and introduces a simple but effective method to aggregate features at instance-level.
We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains.
arXiv Detail & Related papers (2021-10-04T13:09:56Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
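As a purely illustrative sketch of this family of methods (not this paper's implementation), a cross-domain contrastive loss can treat same-(pseudo-)class source/target pairs as positives and all other cross-domain pairs as negatives; the function name, `tau`, and the tensor shapes below are assumptions.

```python
import torch
import torch.nn.functional as F


def cross_domain_contrastive(src_feats, src_labels, tgt_feats, tgt_pseudo, tau=0.1):
    """Pull each target feature towards source features that share its
    (pseudo-)label and push it away from the rest (InfoNCE-style)."""
    src = F.normalize(src_feats, dim=-1)                   # (S, D)
    tgt = F.normalize(tgt_feats, dim=-1)                   # (T, D)
    logits = tgt @ src.T / tau                             # (T, S) similarities
    pos = (tgt_pseudo[:, None] == src_labels[None, :]).float()
    log_prob = logits - torch.logsumexp(logits, dim=-1, keepdim=True)
    per_sample = -(pos * log_prob).sum(dim=-1) / pos.sum(dim=-1).clamp_min(1.0)
    return per_sample.mean()
```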
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- A Learnable Self-supervised Task for Unsupervised Domain Adaptation on
Point Clouds [7.731213699116179]
We propose a learnable self-supervised task and integrate it into a self-supervision-based point cloud UDA architecture.
In the UDA architecture, an encoder is shared between the networks for the self-supervised task and the main task of point cloud classification or segmentation.
Experiments on PointDA-10 and PointSegDA datasets show that the proposed method achieves new state-of-the-art performance on both classification and segmentation tasks of point cloud UDA.
arXiv Detail & Related papers (2021-04-12T02:30:16Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot
Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
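A minimal, hypothetical sketch of one ingredient of prototype-based cross-domain learning (not the PCS implementation): class prototypes are averaged source features, and target samples are pseudo-labelled by their nearest prototype under cosine similarity. Function names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def class_prototypes(src_feats, src_labels, num_classes):
    """Average the source features of each class into one prototype."""
    protos = torch.zeros(num_classes, src_feats.size(-1))
    for c in range(num_classes):
        mask = src_labels == c
        if mask.any():
            protos[c] = src_feats[mask].mean(dim=0)
    return F.normalize(protos, dim=-1)


def nearest_prototype(tgt_feats, protos):
    """Assign each target sample to its most similar class prototype."""
    sim = F.normalize(tgt_feats, dim=-1) @ protos.T        # (T, C)
    conf, labels = sim.max(dim=-1)
    return labels, conf                                     # pseudo-labels + confidence
```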