SCoDA: Domain Adaptive Shape Completion for Real Scans
- URL: http://arxiv.org/abs/2304.10179v2
- Date: Mon, 24 Apr 2023 06:31:59 GMT
- Title: SCoDA: Domain Adaptive Shape Completion for Real Scans
- Authors: Yushuang Wu, Zizheng Yan, Ce Chen, Lai Wei, Xiao Li, Guanbin Li, Yihao
Li, Shuguang Cui, Xiaoguang Han
- Abstract summary: 3D shape completion from point clouds is a challenging task, especially from scans of real-world objects.
We propose a new task, SCoDA, for the domain adaptation of real scan shape completion from synthetic data.
We propose a novel cross-domain feature fusion method for knowledge transfer and a novel volume-consistent self-training framework for robust learning from real data.
- Score: 78.92028595499245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D shape completion from point clouds is a challenging task, especially from
scans of real-world objects. Considering the paucity of 3D shape ground truths
for real scans, existing works mainly focus on benchmarking this task on
synthetic data, e.g. 3D computer-aided design models. However, the domain gap
between synthetic and real data limits the generalizability of these methods.
Thus, we propose a new task, SCoDA, for the domain adaptation of real scan
shape completion from synthetic data. A new dataset, ScanSalon, is contributed,
comprising elaborate 3D models created by skilled artists to match the real
scans. To address this new task, we propose a novel cross-domain feature fusion
method for knowledge transfer and a novel volume-consistent self-training
framework for robust learning from real data. Extensive experiments demonstrate
that our method is effective, yielding an improvement of 6%~7% in mIoU.
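The reported gain is measured in mIoU over completed shapes. As a minimal, hypothetical sketch (the abstract does not specify the voxel resolution or occupancy convention used for evaluation), mean IoU over boolean occupancy grids could be computed as:

```python
import numpy as np

def voxel_iou(pred_occ: np.ndarray, gt_occ: np.ndarray) -> float:
    """IoU between two boolean occupancy grids of the same shape."""
    inter = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    # Treat two empty grids as a perfect match.
    return float(inter) / float(union) if union > 0 else 1.0

def mean_iou(pred_grids, gt_grids) -> float:
    """Mean IoU over a collection of (prediction, ground-truth) grid pairs."""
    return float(np.mean([voxel_iou(p, g) for p, g in zip(pred_grids, gt_grids)]))
```

Here `voxel_iou` and `mean_iou` are illustrative helpers, not functions from the paper's codebase; the actual evaluation may use a different voxelization or a mesh-based IoU.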
Related papers
- Syn-to-Real Unsupervised Domain Adaptation for Indoor 3D Object Detection [50.448520056844885]
We propose a novel framework for syn-to-real unsupervised domain adaptation in indoor 3D object detection.
Adapting from the synthetic dataset 3D-FRONT to the real-world datasets ScanNetV2 and SUN RGB-D, our method demonstrates remarkable mAP25 improvements of 9.7% and 9.1% over Source-Only baselines.
arXiv Detail & Related papers (2024-06-17T08:18:41Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
This work presents Zero123-6D, the first work to demonstrate the utility of diffusion-model-based novel-view synthesizers in enhancing category-level RGB 6D pose estimation.
The method reduces data requirements, removes the need for depth information in the zero-shot category-level 6D pose estimation task, and increases performance, as demonstrated quantitatively through experiments on the CO3D dataset.
arXiv Detail & Related papers (2024-03-21T10:38:18Z)
- Robust Category-Level 3D Pose Estimation from Synthetic Data [17.247607850702558]
We introduce SyntheticP3D, a new synthetic dataset for object pose estimation generated from CAD models.
We propose a novel approach (CC3D) for training neural mesh models that perform pose estimation via inverse rendering.
arXiv Detail & Related papers (2023-05-25T14:56:03Z)
- A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success at the price of large-scale labeled training data.
However, the uncontrollable data collection process produces non-IID training and test data, in which undesired duplication may exist.
To circumvent these issues, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z)
- Towards Deep Learning-based 6D Bin Pose Estimation in 3D Scans [0.0]
This paper focuses on a specific task of 6D pose estimation of a bin in 3D scans.
We present a high-quality dataset composed of synthetic data and real scans captured by a structured-light scanner with precise annotations.
arXiv Detail & Related papers (2021-12-17T16:19:06Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of synthetic datasets, which consist of CAD object models, to boost learning on real datasets.
However, recent work on 3D pre-training fails when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results, outperforming prior work by 5% on object detection in ScanNet scenes and by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.