Photogrammetry for Digital Twinning Industry 4.0 (I4) Systems
- URL: http://arxiv.org/abs/2407.18951v1
- Date: Fri, 12 Jul 2024 04:51:19 GMT
- Title: Photogrammetry for Digital Twinning Industry 4.0 (I4) Systems
- Authors: Ahmed Alhamadah, Muntasir Mamun, Henry Harms, Mathew Redondo, Yu-Zheng Lin, Jesus Pacheco, Soheil Salehi, Pratik Satam
- Abstract summary: Digital Twins (DTs) are a transformational technology that leverages software systems to replicate the behavior of physical processes.
This paper explores the use of photogrammetry and 3D scanning techniques to create an accurate visual representation of the 'Physical Process'.
The results indicate that photogrammetry using consumer-grade devices can be an efficient and cost-effective approach to creating DTs for smart manufacturing.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The onset of Industry 4.0 is rapidly transforming the manufacturing world through the integration of cloud computing, machine learning (ML), artificial intelligence (AI), and universal network connectivity, resulting in performance optimization and increased productivity. Digital Twins (DTs) are one such transformational technology, leveraging software systems to replicate the behavior of a physical process and represent it in a digital environment. This paper explores the use of photogrammetry (the process of reconstructing physical objects into virtual 3D models from photographs) and 3D scanning techniques to create an accurate visual representation of the 'Physical Process' that interacts with ML/AI-based behavior models. To achieve this, we used a readily available consumer device, the iPhone 15 Pro, which features stereo vision capabilities, to capture the depth of an Industry 4.0 system. By processing these images with 3D scanning tools, we created a raw 3D model, which was then refined in 3D modeling and rendering software to produce the DT model. The paper assesses the reliability of this method by measuring the error between the ground truth (measurements taken manually with a tape measure) and the final 3D model. The overall mean error is 4.97% and the overall standard deviation of the error is 5.54% between the ground truth measurements and their photogrammetry counterparts. These results indicate that photogrammetry using consumer-grade devices can be an efficient and cost-effective approach to creating DTs for smart manufacturing, while the approach's flexibility allows for iterative improvement of the models over time.
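The error analysis described above (mean and standard deviation of the percentage error between tape-measure ground truth and the photogrammetry model) can be sketched as follows. This is a minimal illustration of the metric, not the authors' code; the measurement values are hypothetical placeholders, not the paper's actual data.

```python
# Sketch of the paper's error metric: absolute percentage error per
# measured dimension, then the mean and standard deviation over all
# dimensions. All numbers below are hypothetical examples.

def percent_errors(ground_truth, measured):
    """Absolute percentage error for each (ground truth, model) pair."""
    return [abs(m - g) / g * 100.0 for g, m in zip(ground_truth, measured)]

def mean_and_std(values):
    """Mean and population standard deviation of a list of values."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var ** 0.5

# Hypothetical dimensions (cm): tape-measure value vs. 3D-model value.
gt = [120.0, 45.5, 30.0, 75.2]
model = [118.1, 46.0, 31.2, 74.0]

errs = percent_errors(gt, model)
mean_err, std_err = mean_and_std(errs)
print(f"mean error: {mean_err:.2f}%  std: {std_err:.2f}%")
```

The paper reports a 4.97% mean and 5.54% standard deviation over its real measurement set; the same two summary statistics are what this sketch computes.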
Related papers
- Scalable Cloud-Native Pipeline for Efficient 3D Model Reconstruction from Monocular Smartphone Images [9.61065600471628]
We present a novel cloud-native pipeline that can automatically reconstruct 3D models from monocular 2D images captured using a smartphone camera.
Our solution produces a reusable 3D model, with embedded materials and textures, exportable and customizable in any external software or 3D engine.
arXiv Detail & Related papers (2024-09-28T11:15:26Z) - 3D-VirtFusion: Synthetic 3D Data Augmentation through Generative Diffusion Models and Controllable Editing [52.68314936128752]
We propose a new paradigm to automatically generate 3D labeled training data by harnessing the power of pretrained large foundation models.
For each target semantic class, we first generate 2D images of a single object in various structures and appearances via diffusion models and ChatGPT-generated text prompts.
We transform these augmented images into 3D objects and construct virtual scenes by random composition.
arXiv Detail & Related papers (2024-08-25T09:31:22Z) - Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication [50.541882834405946]
We introduce Atlas3D, an automatic and easy-to-implement text-to-3D method.
Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization.
We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
arXiv Detail & Related papers (2024-05-28T18:33:18Z) - ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models [65.22994156658918]
We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation that renders more 3D-consistent images at any viewpoint.
arXiv Detail & Related papers (2024-03-04T07:57:05Z) - DIO: Dataset of 3D Mesh Models of Indoor Objects for Robotics and Computer Vision Applications [17.637438333501628]
The creation of accurate virtual models of real-world objects is imperative to robotic simulations and applications such as computer vision.
This paper documents the different methods employed for generating a database of mesh models of real-world objects.
arXiv Detail & Related papers (2024-02-19T04:58:40Z) - AutoDecoding Latent 3D Diffusion Models [95.7279510847827]
We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core.
The 3D autodecoder framework embeds properties learned from the target dataset in the latent space.
We then identify the appropriate intermediate volumetric latent space, and introduce robust normalization and de-normalization operations.
arXiv Detail & Related papers (2023-07-07T17:59:14Z) - 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aiming at augmenting the driving scenes on camera in the 3D space.
We first utilize Neural Radiance Field (NeRF) to reconstruct the 3D models of background and foreground objects.
Then, augmented driving scenes can be obtained by placing the 3D objects with adapted location and orientation at the pre-defined valid region of backgrounds.
arXiv Detail & Related papers (2023-03-18T05:51:05Z) - Fast mesh denoising with data driven normal filtering using deep variational autoencoders [6.25118865553438]
We propose a fast and robust denoising method for dense 3D scanned industrial models.
The proposed approach employs conditional variational autoencoders to effectively filter face normals.
For 3D models with more than 1e4 faces, the presented pipeline is twice as fast as methods with equivalent reconstruction error.
arXiv Detail & Related papers (2021-11-24T20:25:15Z) - Geometric Processing for Image-based 3D Object Modeling [2.6397379133308214]
This article introduces the state-of-the-art methods for three major components of geometric processing: 1) geo-referencing; 2) image dense matching; 3) texture mapping.
The largely automated geometric processing of images in a 3D object reconstruction workflow is becoming a critical part of reality-based 3D modeling.
arXiv Detail & Related papers (2021-06-27T18:33:30Z) - A novel method for object detection using deep learning and CAD models [0.4588028371034407]
Object Detection (OD) is an important computer vision problem for industry, which can be used for quality control on production lines.
Recently, Deep Learning (DL) methods have enabled practitioners to train OD models performing well on complex real world images.
In this paper, we introduce a fully automated method that uses a CAD model of an object and returns a fully trained OD model for detecting this object.
arXiv Detail & Related papers (2021-02-12T19:19:45Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.