A Method to Generate High Precision Mesh Model and RGB-D Dataset for 6D Pose Estimation Task
- URL: http://arxiv.org/abs/2011.08771v1
- Date: Tue, 17 Nov 2020 16:56:57 GMT
- Title: A Method to Generate High Precision Mesh Model and RGB-D Dataset for 6D Pose Estimation Task
- Authors: Minglei Lu, Yu Guo, Fei Wang, Zheng Dang
- Abstract summary: We propose a new method for object reconstruction that takes speed, accuracy, and robustness into account.
Our data is closer to rendered data, further shrinking the gap between real and synthetic data.
- Score: 10.24919213221012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, 3D vision has improved greatly due to the development of deep
neural networks. A high-quality dataset is important to deep learning
methods. Existing datasets for 3D vision, such as BigBIRD and YCB, have
already been constructed. However, the depth sensors used to build these
datasets are out of date, so their resolution and accuracy cannot meet
today's higher standards. Although equipment and technology have improved,
no one has attempted to collect a new and better dataset. Here we try to
fill that gap. To this end, we propose a new method for object
reconstruction that takes speed, accuracy, and robustness into account.
Our method can be used to produce large datasets with better and more
accurate annotations. More importantly, our data is closer to rendered
data, further shrinking the gap between real and synthetic data.
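The full reconstruction pipeline is described in the paper; as a minimal sketch of one building block that any such RGB-D pipeline relies on, the snippet below back-projects a depth image into a camera-frame point cloud using pinhole intrinsics (the function name and parameters are ours, not the authors').

    import numpy as np

    def depth_to_pointcloud(depth, fx, fy, cx, cy):
        # Back-project a depth image (meters) into camera-frame 3D points.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop pixels with no depth return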
Related papers
- ARKit LabelMaker: A New Scale for Indoor 3D Scene Understanding [51.509115746992165]
We introduce ARKit LabelMaker, the first large-scale, real-world 3D dataset with dense semantic annotations.
We also push forward the state-of-the-art performance on ScanNet and ScanNet200 dataset with prevalent 3D semantic segmentation models.
arXiv Detail & Related papers (2024-10-17T14:44:35Z)
- Domain-Transferred Synthetic Data Generation for Improving Monocular Depth Estimation [9.812476193015488]
We propose a method of data generation in simulation using 3D synthetic environments and CycleGAN domain transfer.
We compare this method of data generation to the popular NYUDepth V2 dataset by training a depth estimation model based on the DenseDepth structure using different training sets of real and simulated data.
We evaluate the performance of the models on newly collected images and LiDAR depth data from a Husky robot to verify the generalizability of the approach and show that GAN-transformed data can serve as an effective alternative to real-world data, particularly in depth estimation.
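As a rough sketch of that data path (a sketch under assumptions, not the paper's code: generator stands in for the CycleGAN sim-to-real generator, depth_net for the DenseDepth-style model, and the L1 loss is our placeholder):

    import torch
    import torch.nn.functional as F

    def train_step(depth_net, generator, sim_rgb, sim_depth, optimizer):
        # Translate simulated RGB toward the real-image domain; the frozen
        # CycleGAN generator is used purely as a data preprocessor.
        with torch.no_grad():
            rgb = generator(sim_rgb)
        pred = depth_net(rgb)
        loss = F.l1_loss(pred, sim_depth)  # depth labels come free from the simulator
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()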
arXiv Detail & Related papers (2024-05-02T09:21:10Z)
- Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data [87.61900472933523]
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation.
We scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data.
We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos.
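A minimal sketch of such a data engine, assuming a pretrained teacher that maps image batches to depth maps (the keep filter is a hypothetical quality check; the paper's engine is more involved):

    import torch

    @torch.no_grad()
    def auto_annotate(teacher, unlabeled_loader, keep=lambda depth: True):
        # Turn unlabeled images into (image, pseudo-depth) training pairs.
        pairs = []
        for images in unlabeled_loader:
            depths = teacher(images)
            for img, d in zip(images, depths):
                if keep(d):  # optionally drop low-confidence predictions
                    pairs.append((img, d))
        return pairs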
arXiv Detail & Related papers (2024-01-19T18:59:52Z)
- MDT3D: Multi-Dataset Training for LiDAR 3D Object Detection Generalization [3.8243923744440926]
3D object detection models trained on a source dataset with a specific point distribution have shown difficulties in generalizing to unseen datasets.
We leverage the information available from several annotated source datasets with our Multi-Dataset Training for 3D Object Detection (MDT3D) method.
We show how we managed the mix of datasets during training and finally introduce a new cross-dataset augmentation method: cross-dataset object injection.
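A simplified sketch of the injection idea, assuming boxes in [x, y, z, dx, dy, dz, yaw] form (the collision handling here is a crude distance check; the paper's procedure is more careful):

    import numpy as np

    def inject_object(scene_points, obj_points, obj_box, clearance=1.0):
        # Paste a labeled object from a source dataset into a target scene,
        # skipping the paste if the landing spot is already occupied.
        center_xy = obj_box[:2]
        dists = np.linalg.norm(scene_points[:, :2] - center_xy, axis=1)
        if np.any(dists < clearance):
            return scene_points, None  # occupied: keep the scene unchanged
        return np.vstack([scene_points, obj_points]), obj_box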
arXiv Detail & Related papers (2023-08-02T08:20:00Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
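One modality in such a conversion is a point cloud sampled from the raw mesh; below is a self-contained sketch of area-weighted surface sampling (our simplification of one pipeline stage, not the authors' code):

    import numpy as np

    def sample_surface(vertices, faces, n=2048):
        # Uniformly sample n points on a triangle mesh, weighting by area.
        tris = vertices[faces]                        # (F, 3, 3)
        a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
        areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
        idx = np.random.choice(len(faces), n, p=areas / areas.sum())
        u, v = np.random.rand(2, n)
        flip = u + v > 1.0
        u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]  # fold into the triangle
        return a[idx] + u[:, None] * (b[idx] - a[idx]) + v[:, None] * (c[idx] - a[idx])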
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- Expanding Small-Scale Datasets with Guided Imagination [92.5276783917845]
Dataset expansion is a new task aimed at expanding a ready-to-use small dataset by automatically creating new labeled samples.
The proposed Guided Imagination Framework (GIF) conducts data imagination by optimizing the latent features of the seed data in the semantically meaningful space of a prior model.
GIF-SD obtains 13.5% higher model accuracy on natural image datasets than unguided expansion with SD (Stable Diffusion).
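GIF's guidance criteria are in the paper; the sketch below shows only the unguided core (encode a seed, perturb its latent, decode new samples), with encoder and decoder as stand-ins for the prior model:

    import torch

    @torch.no_grad()
    def expand(encoder, decoder, seed_image, label, k=5, noise=0.1):
        # Create k new labeled samples by perturbing the seed's latent
        # features. GIF optimizes the perturbations under guidance criteria;
        # here they are random, which is what "unguided expansion" means.
        z = encoder(seed_image)
        return [(decoder(z + noise * torch.randn_like(z)), label) for _ in range(k)]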
arXiv Detail & Related papers (2022-11-25T09:38:22Z)
- THE Benchmark: Transferable Representation Learning for Monocular Height Estimation [25.872962101146115]
We propose a new benchmark dataset to study the transferability of height estimation models in a cross-dataset setting.
This benchmark dataset includes a newly proposed large-scale synthetic dataset, a newly collected real-world dataset, and four existing datasets from different cities.
In this paper, we propose a scale-deformable convolution module to enhance the window-based Transformer for handling the scale-variation problem in the height estimation task.
arXiv Detail & Related papers (2021-12-30T09:40:26Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
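A toy sketch of the layout-randomization idea (no collision checks or floor/wall geometry, which the actual method handles):

    import numpy as np

    def random_room(object_clouds, room=(6.0, 6.0), rng=None):
        # Scatter CAD object point clouds at random yaws and positions to
        # form a pseudo-scene; axis-aligned boxes double as free labels.
        rng = rng or np.random.default_rng()
        scene, boxes = [], []
        for pts in object_clouds:
            yaw = rng.uniform(0.0, 2.0 * np.pi)
            R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                          [np.sin(yaw),  np.cos(yaw), 0.0],
                          [0.0, 0.0, 1.0]])
            t = np.array([rng.uniform(0, room[0]), rng.uniform(0, room[1]), 0.0])
            placed = pts @ R.T + t
            scene.append(placed)
            boxes.append(np.concatenate([placed.min(0), placed.max(0)]))
        return np.vstack(scene), np.array(boxes)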
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Semi-synthesis: A fast way to produce effective datasets for stereo matching [16.602343511350252]
Close-to-real-scene texture rendering is a key factor to boost up stereo matching performance.
We propose semi-synthesis, an effective and fast way to synthesize large amounts of data with close-to-real-scene textures.
With further fine-tuning on the real dataset, we also achieve SOTA performance on Middlebury and competitive results on KITTI and ETH3D datasets.
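The core trick is that rendering from known geometry gives pixel-exact disparity for free; a naive sketch of that relationship (no occlusion handling or sub-pixel resampling, both of which real renderers handle):

    import numpy as np

    def warp_right_view(left, disparity):
        # Synthesize the right view by shifting each left-view pixel by its
        # known, ground-truth disparity; unfilled gaps are left as zeros.
        h, w = disparity.shape
        right = np.zeros_like(left)
        u = np.arange(w)
        for v in range(h):
            x = np.round(u - disparity[v]).astype(int)
            ok = (x >= 0) & (x < w)
            right[v, x[ok]] = left[v, u[ok]]
        return right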
arXiv Detail & Related papers (2021-01-26T14:34:49Z)
- Bridging the Reality Gap for Pose Estimation Networks using Sensor-Based Domain Randomization [1.4290119665435117]
Methods trained on synthetic data use 2D images, as domain randomization in 2D is more developed.
Our method integrates the 3D data into the network to increase the accuracy of the pose estimation.
Experiments on three large pose estimation benchmarks show that the presented method outperforms previous methods trained on synthetic data.
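A minimal sketch of sensor-style randomization on synthetic depth (the noise model here, Gaussian jitter plus random dropout, is our assumption, not the paper's exact model):

    import numpy as np

    def randomize_depth(depth, sigma=0.002, dropout=0.02, rng=None):
        # Perturb a clean synthetic depth map so it looks sensor-captured:
        # Gaussian jitter plus randomly invalidated pixels (zero = no return).
        rng = rng or np.random.default_rng()
        noisy = depth + rng.normal(0.0, sigma, size=depth.shape)
        noisy[rng.random(depth.shape) < dropout] = 0.0
        return noisy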
arXiv Detail & Related papers (2020-11-17T09:12:11Z)
- Improving Deep Stereo Network Generalization with Geometric Priors [93.09496073476275]
Large datasets of diverse real-world scenes with dense ground truth are difficult to obtain.
Many algorithms rely on small real-world datasets of similar scenes or synthetic datasets.
We propose to incorporate prior knowledge of scene geometry into an end-to-end stereo network to help networks generalize better.
arXiv Detail & Related papers (2020-08-25T15:24:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.