Generation for adaption: a Gan-based approach for 3D Domain Adaption in Point Cloud
- URL: http://arxiv.org/abs/2102.07373v1
- Date: Mon, 15 Feb 2021 07:24:10 GMT
- Title: Generation for adaption: a Gan-based approach for 3D Domain Adaption in Point Cloud
- Authors: Junxuan Huang and Chunming Qiao
- Abstract summary: Unsupervised domain adaptation (UDA) seeks to overcome such a problem without target domain labels.
We propose a method that uses a generative adversarial network to generate synthetic data from the source domain.
Experiments show that our approach performs better than other state-of-the-art UDA methods in three popular 3D object/scene datasets.
- Score: 10.614067060304919
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent deep networks have achieved good performance on a variety of 3D point classification tasks. However, these models often face challenges in "wild tasks": there are considerable differences between the labeled training/source data collected by one Lidar and the unseen test/target data collected by a different Lidar. Unsupervised domain adaptation (UDA) seeks to overcome such a problem without target domain labels. Instead of aligning features between source data and target data, we propose a method that uses a generative adversarial network to generate synthetic data from the source domain so that the output is close to the target domain. Experiments show that our approach performs better than other state-of-the-art UDA methods on three popular 3D object/scene datasets (i.e., ModelNet, ShapeNet and ScanNet) for cross-domain 3D object classification.
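The abstract describes the method only at a high level: a generator adapts labeled source clouds so that a discriminator cannot tell them apart from unlabeled target clouds, presumably so that a classifier trained on the adapted clouds transfers to the target domain. Below is a minimal, hypothetical PyTorch sketch of that general recipe; the network shapes, residual deformation, reconstruction penalty and all hyperparameters are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of GAN-based point cloud domain adaptation: a generator
# learns to deform labeled source clouds so a discriminator cannot tell them
# apart from unlabeled target clouds. Shapes and losses are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Predicts a per-point offset that maps source points toward the target style."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, pts):               # pts: (B, N, 3)
        return pts + 0.1 * self.mlp(pts)  # small residual deformation

class Discriminator(nn.Module):
    """PointNet-style classifier: is a cloud a real target cloud or a generated one?"""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)

    def forward(self, pts):                           # pts: (B, N, 3)
        feat = self.point_mlp(pts).max(dim=1).values  # global max-pool over points
        return self.head(feat)                        # (B, 1) real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(src_pts, tgt_pts):
    # 1) Discriminator: target clouds are "real", generated clouds are "fake".
    fake = G(src_pts).detach()
    loss_d = bce(D(tgt_pts), torch.ones(tgt_pts.size(0), 1)) + \
             bce(D(fake), torch.zeros(src_pts.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: fool the discriminator while staying close to the input,
    #    so the source labels remain valid for the adapted clouds.
    fake = G(src_pts)
    loss_g = bce(D(fake), torch.ones(src_pts.size(0), 1)) + \
             1.0 * (fake - src_pts).pow(2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with random stand-in batches (32 clouds of 1024 points each):
src = torch.randn(32, 1024, 3)
tgt = torch.randn(32, 1024, 3)
print(train_step(src, tgt))
```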
Related papers
- CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
arXiv Detail & Related papers (2024-03-06T14:12:38Z)
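The CMDA summary mentions a model that is adversarially trained to produce domain-invariant features. The snippet below is a generic, hypothetical sketch of that idea using a gradient-reversal layer and a binary domain classifier; this is one common way to implement adversarial feature alignment, not CMDA's actual cross-modal architecture.

```python
# Generic sketch of adversarial feature alignment: a domain classifier tries
# to tell source from target features, while a gradient-reversal layer pushes
# the feature extractor to make them indistinguishable. Names/sizes are
# illustrative assumptions, not CMDA's design.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip the gradient for the extractor

feature_extractor = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128))
domain_classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def domain_adversarial_loss(src_pts, tgt_pts, lam=0.1):
    # Pool per-point features into one global descriptor per cloud.
    f_src = feature_extractor(src_pts).max(dim=1).values
    f_tgt = feature_extractor(tgt_pts).max(dim=1).values
    feats = GradReverse.apply(torch.cat([f_src, f_tgt]), lam)
    logits = domain_classifier(feats)
    labels = torch.cat([torch.ones(len(src_pts), 1), torch.zeros(len(tgt_pts), 1)])
    return bce(logits, labels)

# Added to the detector's task loss, this term encourages features that the
# domain classifier cannot separate, i.e. domain-invariant features.
loss = domain_adversarial_loss(torch.randn(8, 1024, 3), torch.randn(8, 1024, 3))
loss.backward()
```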
- Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection [19.703181080679176]
3D object detection from point clouds is crucial in safety-critical autonomous driving.
We propose a density-insensitive domain adaption framework to address the density-induced domain gap.
Experimental results on three widely adopted 3D object detection datasets demonstrate that our proposed domain adaption method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-04-19T06:33:07Z)
- SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from Point Cloud [125.9472454212909]
We present a novel Semi-Supervised Domain Adaptation method for 3D object detection (SSDA3D)
SSDA3D includes an Inter-domain Adaptation stage and an Intra-domain Generalization stage.
Experiments show that, with only 10% labeled target data, our SSDA3D can surpass the fully-supervised oracle model trained with 100% of the target labels.
arXiv Detail & Related papers (2022-12-06T09:32:44Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Domain Adaptation for Real-World Single View 3D Reconstruction [1.611271868398988]
Unsupervised domain adaptation can be used to transfer knowledge from the labeled synthetic source domain to the unlabeled real target domain.
We propose a novel architecture which takes advantage of the fact that in this setting, target domain data is unsupervised with regards to the 3D model but supervised for class labels.
Experiments are performed with ShapeNet as the source domain and domains within the Object Domain Suite (ODDS) dataset as the target.
arXiv Detail & Related papers (2021-08-24T22:02:27Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
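ST3D is summarized above only as a self-training pipeline. The loop below is a heavily simplified, hypothetical sketch of pseudo-label self-training in general (predict on unlabeled target data, keep the confident predictions as pseudo-labels, retrain on them); it omits everything specific to ST3D and to 3D detection, and the stand-in classifier, threshold and schedule are assumptions.

```python
# Hypothetical sketch of the generic pseudo-label self-training loop that
# ST3D-style methods build on: predict on unlabeled target data, keep only
# confident predictions as pseudo-labels, then retrain on them.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in classifier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def self_training_round(target_batches, conf_thresh=0.9, epochs=1):
    # 1) Generate pseudo-labels with the current model.
    pseudo = []
    model.eval()
    with torch.no_grad():
        for x in target_batches:
            probs = torch.softmax(model(x), dim=-1)
            conf, labels = probs.max(dim=-1)
            keep = conf > conf_thresh          # discard low-confidence predictions
            if keep.any():
                pseudo.append((x[keep], labels[keep]))

    # 2) Retrain on the confident pseudo-labeled target samples.
    model.train()
    for _ in range(epochs):
        for x, y in pseudo:
            loss = ce(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return sum(len(y) for _, y in pseudo)

# Usage with random stand-in target batches:
batches = [torch.randn(64, 3) for _ in range(4)]
print("pseudo-labeled samples kept:", self_training_round(batches))
```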
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
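The DDAIG entry names the approach but not its mechanics. As a rough, hypothetical illustration of domain-adversarial image generation in general: a small perturbation network alters an input so that a domain classifier is fooled while the class remains recognizable, and the perturbed images then augment training. The networks, image size, class/domain counts and loss weights below are assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of domain-adversarial image generation: a perturbation
# network alters an image to fool a domain classifier while a label classifier
# still recognizes the class; perturbed images serve as extra training data.
# In practice the label and domain classifiers are trained jointly; here they
# are untrained stand-ins for brevity.
import torch
import torch.nn as nn

perturber = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())
label_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 7))    # 7 classes (assumed)
domain_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))   # 3 source domains (assumed)
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(perturber.parameters(), lr=1e-4)

def perturber_step(images, class_labels, domain_labels, eps=0.1):
    # Generate perturbed images that stay close to the originals.
    perturbed = images + eps * perturber(images)
    # Keep the class recognizable, but *maximize* the domain classifier's error
    # so the perturbation simulates an unseen domain.
    loss = ce(label_clf(perturbed), class_labels) - ce(domain_clf(perturbed), domain_labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return perturbed.detach()   # use as extra training data for the main model

# Usage with random stand-in 32x32 images from 3 source domains:
imgs = torch.randn(16, 3, 32, 32)
new_imgs = perturber_step(imgs, torch.randint(0, 7, (16,)), torch.randint(0, 3, (16,)))
```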