Domain Adaptation for Real-World Single View 3D Reconstruction
- URL: http://arxiv.org/abs/2108.10972v1
- Date: Tue, 24 Aug 2021 22:02:27 GMT
- Title: Domain Adaptation for Real-World Single View 3D Reconstruction
- Authors: Brandon Leung, Siddharth Singh, Arik Horodniceanu
- Abstract summary: unsupervised domain adaptation can be used to transfer knowledge from the labeled synthetic source domain to the unlabeled real target domain.
We propose a novel architecture which takes advantage of the fact that in this setting, target domain data is unsupervised with regards to the 3D model but supervised for class labels.
Experiments use ShapeNet as the source domain and domains within the Object Dataset Domain Suite (ODDS) as the target.
- Score: 1.611271868398988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based object reconstruction algorithms have shown remarkable
improvements over classical methods. However, supervised learning based methods
perform poorly when the training data and the test data have different
distributions. Indeed, most current works perform satisfactorily on the
synthetic ShapeNet dataset, but dramatically fail when presented with real
world images. To address this issue, unsupervised domain adaptation can be used
to transfer knowledge from the labeled synthetic source domain and learn a
classifier for the unlabeled real target domain. To tackle this challenge of
single view 3D reconstruction in the real domain, we experiment with a variety
of domain adaptation techniques inspired by the maximum mean discrepancy (MMD)
loss, Deep CORAL, and the domain adversarial neural network (DANN). From these
findings, we additionally propose a novel architecture which takes advantage of
the fact that in this setting, target domain data is unsupervised with regards
to the 3D model but supervised for class labels. We base our framework on a
recent network called Pix2Vox. Experiments are performed with ShapeNet as the
source domain and domains within the Object Dataset Domain Suite (ODDS) dataset
as the target, which is a real world multiview, multidomain image dataset. The
domains in ODDS vary in difficulty, allowing us to assess notions of domain gap
size. Our results are the first in the multiview reconstruction literature
using this dataset.
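The MMD and Deep CORAL losses the abstract mentions can be sketched over batches of source and target features. This is an illustrative NumPy sketch, not the paper's implementation: a linear kernel is assumed for MMD, and the function names are our own.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel maximum mean discrepancy: squared Euclidean
    distance between the mean feature vectors of the two domains."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def coral_loss(Xs, Xt):
    """Deep CORAL loss: squared Frobenius distance between the source
    and target feature covariance matrices, scaled by 1/(4 d^2) as in
    the original CORAL formulation."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)  # (d, d) source covariance
    Ct = np.cov(Xt, rowvar=False)  # (d, d) target covariance
    return float(np.sum((Cs - Ct) ** 2) / (4 * d * d))
```

In training, either loss would be added to the supervised reconstruction loss on the source domain, penalizing feature distributions that differ between synthetic and real images; DANN achieves a similar alignment adversarially via a gradient-reversal layer instead of an explicit statistic.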
Related papers
- Syn-to-Real Unsupervised Domain Adaptation for Indoor 3D Object Detection [50.448520056844885]
We propose a novel framework for syn-to-real unsupervised domain adaptation in indoor 3D object detection.
Our adaptation results from synthetic dataset 3D-FRONT to real-world datasets ScanNetV2 and SUN RGB-D demonstrate remarkable mAP25 improvements of 9.7% and 9.1% over Source-Only baselines.
arXiv Detail & Related papers (2024-06-17T08:18:41Z)
- CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
arXiv Detail & Related papers (2024-03-06T14:12:38Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Generation for adaption: a Gan-based approach for 3D Domain Adaption in Point Cloud [10.614067060304919]
Unsupervised domain adaptation (UDA) seeks to overcome such a problem without target domain labels.
We propose a method that uses a generative adversarial network to generate synthetic data from the source domain.
Experiments show that our approach performs better than other state-of-the-art UDA methods in three popular 3D object/scene datasets.
arXiv Detail & Related papers (2021-02-15T07:24:10Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.