DIO: Dataset of 3D Mesh Models of Indoor Objects for Robotics and
Computer Vision Applications
- URL: http://arxiv.org/abs/2402.11836v1
- Date: Mon, 19 Feb 2024 04:58:40 GMT
- Title: DIO: Dataset of 3D Mesh Models of Indoor Objects for Robotics and
Computer Vision Applications
- Authors: Nillan Nimal, Wenbin Li, Ronald Clark, Sajad Saeedi
- Abstract summary: The creation of accurate virtual models of real-world objects is imperative to robotic simulations and applications such as computer vision.
This paper documents the different methods employed for generating a database of mesh models of real-world objects.
- Score: 17.637438333501628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of accurate virtual models of real-world objects is imperative
to robotic simulations and applications such as computer vision, artificial
intelligence, and machine learning. This paper documents the different methods
employed for generating a database of mesh models of real-world objects. These
methods address the tedious and time-intensive process of manually generating
the models using CAD software. Essentially, DSLR/phone cameras were employed to
acquire images of target objects. These images were processed using a
photogrammetry software known as Meshroom to generate a dense surface
reconstruction of the scene. The result produced by Meshroom was edited and
simplified using MeshLab, a mesh-editing tool, to produce the final model.
Based on the obtained models, this process was effective in modelling the
geometry and texture of real-world objects with high fidelity. An active 3D
scanner was also utilized to accelerate the process for large objects. All
generated models and captured images are made available on the website of the
project.
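The capture, reconstruct, simplify pipeline described in the abstract can be prototyped; the final simplification step (done in the paper with MeshLab) reduces face count while preserving shape. Below is a minimal, self-contained sketch of vertex-clustering decimation, a simpler stand-in for MeshLab's quadric edge collapse. The function name and toy mesh are illustrative, not from the paper:

```python
from collections import defaultdict

def simplify_vertex_clustering(vertices, faces, cell=0.5):
    """Merge vertices that fall in the same grid cell, then drop
    degenerate and duplicate faces. A crude stand-in for
    MeshLab-style decimation."""
    # Assign each vertex to a grid cell of side length `cell`.
    cell_of = [tuple(int(c // cell) for c in v) for v in vertices]
    groups = defaultdict(list)
    for i, key in enumerate(cell_of):
        groups[key].append(i)
    # One representative vertex (the centroid) per occupied cell.
    new_index, new_vertices = {}, []
    for idxs in groups.values():
        pts = [vertices[i] for i in idxs]
        centroid = tuple(sum(p[k] for p in pts) / len(pts) for k in range(3))
        for i in idxs:
            new_index[i] = len(new_vertices)
        new_vertices.append(centroid)
    # Re-index faces; discard collapsed or duplicated triangles.
    new_faces, seen = [], set()
    for a, b, c in faces:
        f = (new_index[a], new_index[b], new_index[c])
        if len(set(f)) == 3 and frozenset(f) not in seen:
            seen.add(frozenset(f))
            new_faces.append(f)
    return new_vertices, new_faces

# Toy mesh: two triangles sharing an edge, with two nearly coincident
# vertices (indices 0 and 3) that merge at this cell size.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.05, 0.05, 0)]
tris = [(0, 1, 2), (3, 1, 2)]
v2, f2 = simplify_vertex_clustering(verts, tris, cell=0.5)
```

After clustering, the two near-duplicate vertices collapse to one, and the two triangles become a single face. Real pipelines would instead call MeshLab (or its Python binding) for a quality-aware decimation.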
Related papers
- Scalable Cloud-Native Pipeline for Efficient 3D Model Reconstruction from Monocular Smartphone Images [9.61065600471628]
We present a novel cloud-native pipeline that can automatically reconstruct 3D models from monocular 2D images captured using a smartphone camera.
Our solution produces a reusable 3D model, with embedded materials and textures, exportable and customizable in any external software or 3D engine.
arXiv Detail & Related papers (2024-09-28T11:15:26Z)
- Photogrammetry for Digital Twinning Industry 4.0 (I4) Systems [0.43127334486935653]
Digital Twins (DT) are transformational technology that leverage software systems to replicate physical process behavior.
This paper explores the use of photogrammetry and 3D scanning techniques to create an accurate visual representation of the 'Physical Process'.
The results indicate that photogrammetry using consumer-grade devices can be an efficient and cost-effective approach to creating DTs for smart manufacturing.
arXiv Detail & Related papers (2024-07-12T04:51:19Z)
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z)
- ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance [76.7746870349809]
We present ComboVerse, a 3D generation framework that produces high-quality 3D assets with complex compositions by learning to combine multiple models.
Our proposed framework emphasizes spatial alignment of objects, compared with standard score distillation sampling.
arXiv Detail & Related papers (2024-03-19T03:39:43Z)
- ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models [65.22994156658918]
We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation scheme that renders more 3D-consistent images at any viewpoint.
arXiv Detail & Related papers (2024-03-04T07:57:05Z)
- Filling the Holes on 3D Heritage Object Surface based on Automatic Segmentation Algorithm [0.0]
This article proposes an improved method for filling holes on the 3D object surface based on an automatic segmentation.
The method can work on both 3D point cloud surfaces and triangular mesh surfaces.
arXiv Detail & Related papers (2023-10-16T23:01:39Z)
- Visual Localization using Imperfect 3D Models from the Internet [54.731309449883284]
This paper studies how imperfections in 3D models affect localization accuracy.
We show that 3D models from the Internet show promise as an easy-to-obtain scene representation.
arXiv Detail & Related papers (2023-04-12T16:15:05Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- Geometric Processing for Image-based 3D Object Modeling [2.6397379133308214]
This article introduces state-of-the-art methods for three major components of geometric processing: 1) geo-referencing; 2) dense image matching; and 3) texture mapping.
The largely automated geometric processing of images in a 3D object reconstruction workflow is becoming a critical part of reality-based 3D modeling.
arXiv Detail & Related papers (2021-06-27T18:33:30Z)
- Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve [54.054575408582565]
We propose to leverage existing large-scale datasets of 3D models to understand the underlying 3D structure of objects seen in an image.
We present Mask2CAD, which jointly detects objects in real-world images and, for each detected object, optimizes for the most similar CAD model and its pose.
This produces a clean, lightweight representation of the objects in an image.
arXiv Detail & Related papers (2020-07-26T00:08:37Z)
- Leveraging 2D Data to Learn Textured 3D Mesh Generation [33.32377849866736]
We present the first generative model of textured 3D meshes.
We train our model to explain a distribution of images by modelling each image as a 3D foreground object.
It learns to generate meshes that when rendered, produce images similar to those in its training set.
arXiv Detail & Related papers (2020-04-08T18:00:37Z)
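Several of the papers above (the hole-filling work in particular) operate directly on triangle meshes stored as vertex and face arrays. A common primitive in such pipelines is locating hole boundaries: an edge lies on a boundary exactly when it belongs to a single triangle. A minimal, self-contained sketch of that check; the function name is hypothetical, not taken from any paper listed here:

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges used by exactly one triangle; these trace the
    mesh's open boundaries and hole loops."""
    count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[frozenset(e)] += 1  # undirected edge
    return [tuple(sorted(e)) for e, n in count.items() if n == 1]

# A single triangle: all three of its edges are boundary edges.
print(sorted(boundary_edges([(0, 1, 2)])))  # → [(0, 1), (0, 2), (1, 2)]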
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.