Multi-sensor large-scale dataset for multi-view 3D reconstruction
- URL: http://arxiv.org/abs/2203.06111v4
- Date: Tue, 28 Mar 2023 11:11:08 GMT
- Title: Multi-sensor large-scale dataset for multi-view 3D reconstruction
- Authors: Oleg Voynov, Gleb Bobrovskikh, Pavel Karpyshev, Saveliy Galochkin,
Andrei-Timotei Ardelean, Arseniy Bozhenko, Ekaterina Karmanova, Pavel
Kopanev, Yaroslav Labutin-Rymsho, Ruslan Rakhimov, Aleksandr Safin, Valerii
Serpiva, Alexey Artemov, Evgeny Burnaev, Dzmitry Tsetserukou, Denis Zorin
- Abstract summary: We present a new multi-sensor dataset for multi-view 3D surface reconstruction.
It includes registered RGB and depth data from sensors of different resolutions and modalities: smartphones, Intel RealSense, Microsoft Kinect, industrial cameras, and a structured-light scanner.
We provide around 1.4 million images of 107 different scenes acquired from 100 viewing directions under 14 lighting conditions.
- Score: 63.59401680137808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new multi-sensor dataset for multi-view 3D surface
reconstruction. It includes registered RGB and depth data from sensors of
different resolutions and modalities: smartphones, Intel RealSense, Microsoft
Kinect, industrial cameras, and a structured-light scanner. The scenes are
selected to emphasize a diverse set of material properties challenging for
existing algorithms. We provide around 1.4 million images of 107 different
scenes acquired from 100 viewing directions under 14 lighting conditions. We
expect our dataset will be useful for evaluation and training of 3D
reconstruction algorithms and for related tasks. The dataset is available at
skoltech3d.appliedai.tech.
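The stated counts imply 107 × 100 × 14 = 149,800 scene/view/lighting combinations, so the ~1.4 million images correspond to roughly nine or ten captures per combination across the sensor streams. Below is a minimal sketch of how one might enumerate such a corpus; the sensor list and path scheme are our assumptions for illustration, not the dataset's actual layout:

```python
from itertools import product

NUM_SCENES, NUM_VIEWS, NUM_LIGHTS = 107, 100, 14   # counts from the abstract

# Hypothetical sensor streams; the real set and naming may differ.
SENSORS = ["smartphone", "realsense_rgb", "realsense_depth",
           "kinect_rgb", "kinect_depth", "industrial"]

combos = NUM_SCENES * NUM_VIEWS * NUM_LIGHTS
print(f"scene/view/light combinations: {combos:,}")            # 149,800

def frame_paths(root="skoltech3d"):
    """Yield one hypothetical file path per (scene, view, light, sensor)."""
    for s, v, l, cam in product(range(NUM_SCENES), range(NUM_VIEWS),
                                range(NUM_LIGHTS), SENSORS):
        yield f"{root}/scene{s:03d}/{cam}/light{l:02d}/view{v:03d}.png"
```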
Related papers
- Shape2.5D: A Dataset of Texture-less Surfaces for Depth and Normals Estimation [12.757150641117077]
"Shape2.5D" is a novel, large-scale dataset designed to address this gap.
The proposed dataset includes synthetic images rendered with 3D modeling software.
It also includes a real-world subset comprising 4,672 frames captured with a depth camera.
arXiv Detail & Related papers (2024-06-22T12:24:49Z)
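Shape2.5D pairs images with ground-truth depth and normals. As orientation for that task, here is a minimal sketch of how normals are commonly derived from a depth map via finite differences; the orthographic-camera simplification is our assumption, not the dataset's convention:

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Per-pixel surface normals from a depth map via finite differences.

    Assumes an orthographic camera for brevity; a calibrated pinhole
    model would scale the gradients by focal length and depth.
    """
    dz_dv, dz_du = np.gradient(depth)                 # rows (v), cols (u)
    n = np.dstack((-dz_du, -dz_dv, np.ones_like(depth)))
    return n / np.clip(np.linalg.norm(n, axis=2, keepdims=True), 1e-8, None)

depth = np.fromfunction(lambda v, u: 1.0 + 0.01 * u, (64, 64))  # tilted plane
print(normals_from_depth(depth)[32, 32])  # constant normal, tilted along x
```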
- OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction [54.706361479680055]
We introduce the OpenMaterial dataset, comprising 1,001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
arXiv Detail & Related papers (2024-06-13T07:46:17Z)
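The annotation types listed for OpenMaterial map naturally onto a per-sample record. A hypothetical Python mirror of that structure; the field names and types are illustrative, not the dataset's actual API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class OpenMaterialSample:
    """Hypothetical per-view record mirroring the annotations listed above."""
    mesh_path: str            # 3D shape (e.g. path to a mesh file)
    material_type: str        # one of the 295 material labels
    camera_pose: np.ndarray   # 4x4 world-from-camera transform
    depth: np.ndarray         # HxW depth map
    object_mask: np.ndarray   # HxW boolean foreground mask
```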
- DIDLM: A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping [7.050468075029598]
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments.
The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar.
Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios.
arXiv Detail & Related papers (2024-04-15T09:49:33Z)
- Zero-Shot Multi-Object Scene Completion [59.325611678171974]
We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image.
Our method outperforms the current state-of-the-art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-21T17:59:59Z)
- Den-SOFT: Dense Space-Oriented Light Field DataseT for 6-DOF Immersive Experience [28.651514326042648]
We built a custom mobile multi-camera system for capturing dense light fields in large spaces.
Our aim is to support the development of widely used 3D scene reconstruction algorithms.
The collected dataset is sampled much more densely than existing light field datasets.
arXiv Detail & Related papers (2024-03-15T02:39:44Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current 3D vision datasets in terms of accuracy, size, realism, and the imaging modalities suitable for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aiming at augmenting the driving scenes on camera in the 3D space.
We first use Neural Radiance Fields (NeRF) to reconstruct 3D models of the background and of foreground objects.
Augmented driving scenes are then obtained by placing the 3D objects, with adapted locations and orientations, in pre-defined valid regions of the backgrounds (see the sketch below).
arXiv Detail & Related papers (2023-03-18T05:51:05Z)
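A minimal structural sketch of the two-stage Drive-3DAug recipe described above. Every name here (reconstruct_with_nerf, render_composite, the region object with a sample_pose method) is a placeholder of ours, not the paper's API:

```python
import random

def reconstruct_with_nerf(images):
    """Stage 1 placeholder: fit a NeRF and extract a renderable 3D model."""
    ...

def render_composite(background, obj, pose):
    """Stage 2 placeholder: render the object into the background at `pose`."""
    ...

def augment_scene(bg_images, obj_images, valid_regions, rng=random.Random(0)):
    background = reconstruct_with_nerf(bg_images)
    obj = reconstruct_with_nerf(obj_images)
    # Sample an adapted location/orientation inside a pre-defined valid region.
    region = rng.choice(valid_regions)
    pose = region.sample_pose(rng)     # hypothetical region API
    return render_composite(background, obj, pose)
```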
- BS3D: Building-scale 3D Reconstruction from RGB-D Images [25.604775584883413]
We propose an easy-to-use framework for acquiring building-scale 3D reconstructions using a consumer depth camera.
Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms.
arXiv Detail & Related papers (2023-01-03T11:46:14Z)
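Consumer depth cameras like the one used in BS3D (and the RealSense and Kinect in the main dataset) produce depth maps that are typically back-projected into point clouds with the standard pinhole model. A minimal sketch; the intrinsics are illustrative values, not calibration from any of these papers:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an HxW depth map (metres) to an Nx3 point cloud in the camera frame."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid zero-depth pixels

# Illustrative VGA intrinsics, not taken from any of the papers above.
cloud = backproject(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```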
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors that measure per-pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
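An event camera outputs a stream of (x, y, timestamp, polarity) tuples rather than frames. As a toy illustration of the modality, this sketch accumulates signed polarities into an image; it is our own example, not TUM-VIE tooling:

```python
import numpy as np

def accumulate_events(events, height, width):
    """Sum signed polarities of (x, y, t, polarity) events into a frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:   # polarity: +1 brighter, -1 darker
        frame[y, x] += polarity
    return frame

events = [(10, 5, 0.001, +1), (10, 5, 0.002, +1), (3, 7, 0.004, -1)]
print(accumulate_events(events, height=8, width=16)[5, 10])  # 2
```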
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.