Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled
Primitives
- URL: http://arxiv.org/abs/2205.12627v1
- Date: Wed, 25 May 2022 10:07:07 GMT
- Title: Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled
Primitives
- Authors: Xinke Li, Henghui Ding, Zekun Tong, Yuwei Wu, Yeow Meng Chee
- Abstract summary: We propose a cost-effective method for automatically generating a large number of 3D objects with annotations.
These objects are auto-annotated with part labels originating from primitives.
Considering the large overhead of learning on the generated dataset, we propose a dataset distillation strategy.
- Score: 44.03149443379618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous advancements in deep learning can be attributed to the access to
large-scale and well-annotated datasets. However, such a dataset is
prohibitively expensive in 3D computer vision due to the substantial collection
cost. To alleviate this issue, we propose a cost-effective method for
automatically generating a large number of 3D objects with annotations. In
particular, we synthesize objects simply by assembling multiple random
primitives. These objects are thus auto-annotated with part labels originating
from primitives. This allows us to perform multi-task learning by combining the
supervised segmentation with unsupervised reconstruction. Considering the large
overhead of learning on the generated dataset, we further propose a dataset
distillation strategy to remove redundant samples regarding a target dataset.
We conduct extensive experiments for the downstream tasks of 3D object
classification. The results indicate that our dataset, together with multi-task
pretraining on its annotations, achieves the best performance compared to other
commonly used datasets. Further study suggests that our strategy can improve
model performance through a pretraining and fine-tuning scheme, especially for
small-scale datasets. In addition, pretraining with the proposed dataset
distillation method can save 86% of the pretraining time with negligible
performance degradation. We expect that our attempt provides a new data-centric
perspective for training 3D deep models.
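As a rough illustration of the synthesis step described in the abstract, the sketch below assembles a point-cloud object from a handful of randomly posed primitives and records a per-point part label indicating which primitive each point came from. This is a minimal reconstruction of the idea, not the authors' pipeline: the primitive types (sphere and cuboid), counts, point budgets, and pose ranges are assumptions, and the multi-task pretraining and dataset distillation stages are not shown.

```python
# Minimal sketch (assumptions, not the authors' released pipeline): build one
# training object by assembling a few randomly posed primitives and keep a
# per-point part label that records which primitive each point came from.
import numpy as np

def sample_sphere(n):
    """n points drawn uniformly on the unit sphere surface."""
    v = np.random.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sample_cuboid(n):
    """n points drawn uniformly on the surface of an axis-aligned unit cube."""
    pts = np.random.uniform(-0.5, 0.5, size=(n, 3))
    axis = np.random.randint(3, size=n)                  # which coordinate to pin to a face
    pts[np.arange(n), axis] = np.random.choice([-0.5, 0.5], size=n)
    return pts

def random_rotation():
    """Random proper rotation matrix (orthogonal, det = +1) via QR decomposition."""
    q, _ = np.linalg.qr(np.random.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def random_primitive_object(num_primitives=4, points_per_primitive=512):
    """Return (points, part_labels): an (N, 3) point cloud and (N,) integer labels."""
    samplers = [sample_sphere, sample_cuboid]
    points, labels = [], []
    for part_id in range(num_primitives):
        sampler = samplers[np.random.randint(len(samplers))]
        pts = sampler(points_per_primitive)
        pts = pts * np.random.uniform(0.3, 1.0, size=3)   # anisotropic scaling
        pts = pts @ random_rotation().T                   # random orientation
        pts = pts + np.random.uniform(-1.0, 1.0, size=3)  # random placement
        points.append(pts)
        labels.append(np.full(points_per_primitive, part_id))
    return np.concatenate(points), np.concatenate(labels)

points, part_labels = random_primitive_object()
print(points.shape, np.unique(part_labels))   # (2048, 3) [0 1 2 3]
```

Because every point inherits the identity of the primitive it was sampled from, part labels come for free; this is what allows the supervised segmentation branch to be combined with unsupervised reconstruction in the multi-task pretraining described above.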
Related papers
- The Why, When, and How to Use Active Learning in Large-Data-Driven 3D
Object Detection for Safe Autonomous Driving: An Empirical Exploration [1.2815904071470705]
Our findings suggest that entropy querying is a promising strategy for selecting data that enhances model learning in resource-constrained environments.
arXiv Detail & Related papers (2024-01-30T00:14:13Z) - 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z) - Contrastive Lift: 3D Object Instance Segmentation by Slow-Fast
Contrastive Fusion [110.84357383258818]
We propose a novel approach to lift 2D segments to 3D and fuse them by means of a neural field representation.
The core of our approach is a slow-fast clustering objective function, which is scalable and well-suited for scenes with a large number of objects.
Our approach outperforms the state-of-the-art on challenging scenes from the ScanNet, Hypersim, and Replica datasets.
arXiv Detail & Related papers (2023-06-07T17:57:45Z) - RandomRooms: Unsupervised Pre-training from Synthetic Shapes and
Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z) - Self-supervised Learning of 3D Object Understanding by Data Association
and Landmark Estimation for Image Sequence [15.815583594196488]
3D object understanding from a 2D image is a challenging task that infers an additional dimension from reduced-dimensional information.
It is challenging to obtain a large 3D dataset since 3D annotation is expensive and time-consuming.
We propose a strategy that exploits multiple observations of the object in the image sequence in order to surpass the self-performance.
arXiv Detail & Related papers (2021-04-14T18:59:08Z) - Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z) - PointContrast: Unsupervised Pre-training for 3D Point Cloud
Understanding [107.02479689909164]
In this work, we aim at facilitating research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)