MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices
- URL: http://arxiv.org/abs/2303.01932v1
- Date: Fri, 3 Mar 2023 14:02:50 GMT
- Title: MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices
- Authors: Kejie Li, Jia-Wang Bian, Robert Castle, Philip H.S. Torr, Victor
Adrian Prisacariu
- Abstract summary: High-quality 3D ground-truth shapes are critical for 3D object reconstruction evaluation.
We introduce a novel multi-view RGBD dataset captured using a mobile device.
We obtain precise 3D ground-truth shape without relying on high-end 3D scanners.
- Score: 78.20154723650333
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High-quality 3D ground-truth shapes are critical for 3D object reconstruction
evaluation. However, it is difficult to create a replica of an object in
reality, and even 3D reconstructions generated by 3D scanners have artefacts
that cause biases in evaluation. To address this issue, we introduce a novel
multi-view RGBD dataset captured using a mobile device, which includes highly
precise 3D ground-truth annotations for 153 object models featuring a diverse
set of 3D structures. We obtain precise 3D ground-truth shape without relying
on high-end 3D scanners by utilising LEGO models with known geometry as the 3D
structures for image capture. The distinct data modality offered by
high-resolution RGB images and low-resolution depth maps captured on a mobile
device, when combined with precise 3D geometry annotations, presents a unique
opportunity for future research on high-fidelity 3D reconstruction.
Furthermore, we evaluate a range of 3D reconstruction algorithms on the
proposed dataset. Project page: http://code.active.vision/MobileBrick/
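The abstract does not spell out the evaluation protocol, but benchmarks of this kind typically score a reconstructed surface against the ground-truth shape with accuracy, completeness, and an F-score at a distance threshold over sampled surface points. A minimal sketch of that metric follows; this is not the authors' official evaluation code, and the 2.5 mm threshold and point counts are assumptions:

```python
# Standard point-cloud F-score evaluation, as used by many 3D-reconstruction
# benchmarks. NOT the official MobileBrick evaluation code; the 2.5 mm
# threshold and the random stand-in point clouds are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def f_score(pred_pts, gt_pts, tau=0.0025):
    """pred_pts, gt_pts: (N, 3) points sampled from the predicted and
    ground-truth surfaces; tau: distance threshold in metres."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # accuracy side
    precision = float((d_pred_to_gt < tau).mean())
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # completeness side
    recall = float((d_gt_to_pred < tau).mean())
    if precision + recall == 0.0:
        return 0.0, precision, recall
    return 2 * precision * recall / (precision + recall), precision, recall

pred = np.random.rand(10000, 3)   # stand-in predicted surface samples
gt = np.random.rand(10000, 3)     # stand-in ground-truth surface samples
f, p, r = f_score(pred, gt)
print(f"F-score {f:.3f} (precision {p:.3f}, recall {r:.3f})")
```

Precision penalises spurious geometry and recall penalises missing geometry; a ground truth as precise as the LEGO models is what makes evaluation at tight thresholds meaningful.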
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model [86.37536249046943]
DMV3D is a novel 3D generation approach that uses a transformer-based 3D large reconstruction model to denoise multi-view diffusion.
Our reconstruction model incorporates a triplane NeRF representation (sketched below) and can denoise noisy multi-view images via NeRF reconstruction and rendering.
arXiv Detail & Related papers (2023-11-15T18:58:41Z)
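The triplane NeRF representation mentioned above factorises a 3D feature volume into three axis-aligned 2D feature planes; a point's feature is gathered by projecting it onto each plane and interpolating. A minimal sketch of that lookup, where the resolution, channel count, and sum-fusion are illustrative assumptions rather than DMV3D's exact design:

```python
# Minimal triplane feature lookup: project a 3D point onto the xy, xz and
# yz planes, bilinearly sample each feature map, and fuse (here: sum).
# Plane resolution, channel count and sum-fusion are assumptions.
import torch
import torch.nn.functional as F

def sample_triplane(planes, pts):
    """planes: (3, C, R, R) feature planes for xy, xz, yz;
    pts: (N, 3) points with coordinates in [-1, 1]."""
    coords = torch.stack([pts[:, [0, 1]],    # xy plane
                          pts[:, [0, 2]],    # xz plane
                          pts[:, [1, 2]]])   # yz plane -> (3, N, 2)
    grid = coords.unsqueeze(2)               # (3, N, 1, 2) for grid_sample
    feats = F.grid_sample(planes, grid, align_corners=True)  # (3, C, N, 1)
    return feats.squeeze(-1).sum(dim=0).t()  # (N, C): sum over the 3 planes

planes = torch.randn(3, 32, 128, 128)        # three 32-channel 128x128 planes
pts = torch.rand(1024, 3) * 2 - 1            # query points in [-1, 1]^3
features = sample_triplane(planes, pts)      # (1024, 32) per-point features
```

Storing features on three planes costs O(R^2) memory instead of O(R^3) for a dense voxel grid, which is one reason triplanes are popular in large reconstruction models.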
- Cross3DVG: Cross-Dataset 3D Visual Grounding on Different RGB-D Scans [6.936271803454143]
We present a novel task for cross-dataset visual grounding in 3D scenes (Cross3DVG).
We created RIORefer, a large-scale 3D visual grounding dataset.
It includes more than 63k diverse descriptions of 3D objects within 1,380 indoor RGB-D scans from 3RScan.
arXiv Detail & Related papers (2023-05-23T09:52:49Z)
- Anything-3D: Towards Single-view Anything Reconstruction in the Wild [61.090129285205805]
We introduce Anything-3D, a methodical framework that ingeniously combines a series of visual-language models and the Segment-Anything object segmentation model.
Our approach employs a BLIP model to generate textual descriptions, utilizes the Segment-Anything model to extract objects of interest, and leverages a text-to-image diffusion model to lift each object into a neural radiance field (the pipeline is sketched below).
arXiv Detail & Related papers (2023-04-19T16:39:51Z)
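The Anything-3D pipeline above chains three off-the-shelf models; its control flow can be sketched as below. Every helper here is a hypothetical stand-in for the corresponding model call, not the authors' actual API:

```python
# High-level sketch of the Anything-3D pipeline described above. Each helper
# is a hypothetical stand-in: the real systems (BLIP, Segment-Anything, a
# text-to-image diffusion prior) each need their own checkpoints.
import numpy as np

def caption_with_blip(image):
    return "an object"   # hypothetical: BLIP image captioning

def segment_with_sam(image, prompt_point):
    return np.ones(image.shape[:2], dtype=bool)   # hypothetical: SAM mask

def lift_to_nerf(masked_image, caption):
    # Hypothetical: optimise a NeRF with a text-to-image diffusion prior
    # (e.g. an SDS-style loss) conditioned on the caption.
    return {"caption": caption, "nerf_params": None}

def anything_3d(image, prompt_point):
    caption = caption_with_blip(image)              # 1. describe the object
    mask = segment_with_sam(image, prompt_point)    # 2. isolate it
    masked = image * mask[..., None]                # 3. keep object pixels only
    return lift_to_nerf(masked, caption)            # 4. lift to a radiance field

nerf = anything_3d(np.zeros((256, 256, 3)), prompt_point=(128, 128))
```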
- Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field [16.15190186574068]
We propose Lift3D, an inverted 2D-to-3D generation framework for synthesizing 3D training data.
By lifting a well-disentangled 2D GAN to a 3D object NeRF, Lift3D provides explicit 3D information for the generated objects.
We evaluate the effectiveness of our framework by augmenting autonomous driving datasets.
arXiv Detail & Related papers (2023-04-07T07:43:02Z)
- OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation [107.71752592196138]
We propose OmniObject3D, a large-vocabulary 3D object dataset with massive high-quality real-scanned 3D objects.
It comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets.
Each 3D object is captured with both 2D and 3D sensors, providing textured meshes, point clouds, multiview rendered images, and multiple real-captured videos.
arXiv Detail & Related papers (2023-01-18T18:14:18Z)
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image, aligned with 3D scene space via a 3D feature lifting operator (sketched below).
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates 3D detection as keypoint detection in 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z)
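The "3D feature lifting operator" above is commonly implemented by projecting each voxel centre into the image with the camera intrinsics and bilinearly sampling the 2D feature map at that pixel. A minimal sketch under a pinhole camera; the intrinsics, grid extent, and out-of-view handling are illustrative assumptions, not the paper's exact operator:

```python
# Minimal 2D-to-3D feature lifting: project voxel centres with a pinhole
# camera, then bilinearly sample image features at the projections.
# Intrinsics, grid extent and zero-padding for out-of-view voxels are
# assumptions for illustration.
import torch
import torch.nn.functional as F

def lift_features(feat2d, voxel_xyz, K):
    """feat2d: (C, H, W) image feature map; voxel_xyz: (V, 3) voxel centres
    in camera coordinates (z > 0); K: (3, 3) camera intrinsics."""
    C, H, W = feat2d.shape
    uvw = voxel_xyz @ K.t()                  # pinhole projection -> (V, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective divide -> pixels
    # Normalise pixel coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.view(1, -1, 1, 2)            # (1, V, 1, 2)
    feats = F.grid_sample(feat2d.unsqueeze(0), grid,
                          align_corners=True, padding_mode="zeros")
    return feats.view(C, -1).t()             # (V, C) per-voxel features

feat2d = torch.randn(64, 60, 80)             # CNN features for one image
K = torch.tensor([[100., 0., 40.], [0., 100., 30.], [0., 0., 1.]])
voxels = torch.rand(4096, 3) * torch.tensor([2., 2., 4.]) \
         + torch.tensor([-1., -1., 1.])      # a grid in front of the camera
vox_feats = lift_features(feat2d, voxels, K) # (4096, 64)
```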
- DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension [71.71234436165255]
We contribute DensePose 3D, a method that learns 3D reconstructions of articulated objects in a weakly supervised fashion from 2D image annotations only.
Because it does not require 3D scans, DensePose 3D can be used for learning a wide range of articulated categories such as different animal species.
We show significant improvements compared to state-of-the-art non-rigid structure-from-motion baselines on both synthetic and real data on categories of humans and animals.
arXiv Detail & Related papers (2021-08-31T18:33:55Z)
- Monocular 3D Object Detection with Decoupled Structured Polygon Estimation and Height-Guided Depth Estimation [41.29145717658494]
This paper proposes a novel unified framework which decomposes the detection problem into a structured polygon prediction task and a height-guided depth recovery task (sketched below).
Compared to widely-used 3D bounding box proposals, the structured polygon is shown to be a better representation for 3D detection.
Experiments are conducted on the challenging KITTI benchmark, in which our method achieves state-of-the-art detection accuracy.
arXiv Detail & Related papers (2020-02-05T03:25:02Z)
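Height-guided depth estimation, named in the title above, rests on the pinhole relation z = f * H / h between an object's real height H, its pixel height h, and the focal length f in pixels. A minimal numeric sketch; all values are made up for illustration, not taken from the paper:

```python
# Height-guided depth recovery under a pinhole camera: an object of real
# height H metres spanning h pixels at focal length f (pixels) lies at
# depth z = f * H / h. All numbers below are illustrative assumptions.
def height_guided_depth(f_pixels, height_m, height_px):
    return f_pixels * height_m / height_px

# A 1.5 m tall car spanning 60 px under a 720 px focal length:
z = height_guided_depth(f_pixels=720.0, height_m=1.5, height_px=60.0)
print(f"estimated depth: {z:.1f} m")   # 720 * 1.5 / 60 = 18.0 m
```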
This list is automatically generated from the titles and abstracts of the papers on this site.