Embedded Shape Matching in Photogrammetry Data for Modeling Making
Knowledge
- URL: http://arxiv.org/abs/2312.13489v1
- Date: Wed, 20 Dec 2023 23:52:53 GMT
- Title: Embedded Shape Matching in Photogrammetry Data for Modeling Making
Knowledge
- Authors: Demircan Tas, Mine Özkar
- Abstract summary: We use two-dimensional samples obtained by projection to overcome the difficulties of pattern recognition in three-dimensional models.
The application is based on photogrammetric capture of a few examples of Zeugma mosaics and three-dimensional digital modeling of a set of Seljuk era brick walls.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In three-dimensional models obtained by photogrammetry of existing
structures, not every shape the eye can discern has an equivalent among the
geometric components of the model. However, the matching of
meaningful parts and assemblages with the records acquired with rapid and
detailed documentation methods will provide an advantage for the creation of
information models of existing structures. While aiming to produce answers to
this problem and in order to overcome the difficulties of pattern recognition
in three-dimensional models, we used two-dimensional samples obtained by
projection. Processing techniques such as ambient occlusion, curvature and
normal maps are commonly used in modern computer graphics applications that
enable the representation of three-dimensional surface properties in
two-dimensional data sets. The method we propose is based on the recognition of
patterns through these mappings instead of the usual light-based visualization.
The first stage of the application is photogrammetric capture of a few examples
of Zeugma mosaics and three-dimensional digital modeling of a set of Seljuk era
brick walls based on knowledge obtained through architectural history
literature. The second stage covers the creation of digital models byprocessing
the surface representation obtained from this data using Alice Vision,
OpenCV-Python, and Autodesk Maya to include information on aspects of the
making of the walls. What is envisioned for the next stages is that the mapping
data contributes to and supports the knowledge for rule-based design and making
processes of cultural heritage.
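As a rough illustration of the matching step the abstract describes, the sketch below runs zero-mean normalised cross-correlation over a grayscale array standing in for a projected surface map (e.g. an ambient-occlusion or curvature render). This is the same measure OpenCV-Python, which the paper names as a tool, exposes as `cv2.matchTemplate` with `TM_CCOEFF_NORMED`; the synthetic array, template crop, and 0.99 threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def match_template_ncc(image, template):
    """Zero-mean normalised cross-correlation of `template` against every
    window of `image` -- the score OpenCV-Python computes with
    cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)."""
    th, tw = template.shape
    t = template.astype(float)
    t -= t.mean()
    t_norm = np.sqrt((t * t).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw].astype(float)
            w -= w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            # Constant windows have zero variance; score them 0.
            out[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

# Synthetic stand-in for a projected surface map of a wall: two identical
# bright "brick"-like features on a dark background.
surface_map = np.zeros((64, 64), dtype=np.uint8)
surface_map[10:18, 12:20] = 255
surface_map[40:48, 30:38] = 255

# Template: one feature instance, cropped with a margin so it has variance.
template = surface_map[8:20, 10:22]

scores = match_template_ncc(surface_map, template)
ys, xs = np.where(scores >= 0.99)  # assumed similarity threshold
print(list(zip(ys.tolist(), xs.tolist())))  # → [(8, 10), (38, 28)]
```

In the paper's setting, `surface_map` would instead be a rendered ambient-occlusion, curvature, or normal map of the photogrammetric model, so the correlation measures surface-shape similarity rather than lighting.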
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward the target texture domain.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
- From Flat to Spatial: Comparison of 4 methods constructing 3D, 2 and 1/2D Models from 2D Plans with neural networks [0.0]
The conversion of single images into 2 and 1/2D and 3D meshes is a promising technology that enhances design visualization and efficiency.
This paper evaluates four innovative methods: "One-2-3-45," "CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model," "Instant Mesh," and "Image-to-Mesh."
arXiv Detail & Related papers (2024-07-29T13:01:20Z)
- Scalable Scene Modeling from Perspective Imaging: Physics-based Appearance and Geometry Inference [3.2229099973277076]
This dissertation presents contributions that advance 3D scene modeling toward the state of the art.
In contrast to the prevailing deep learning methods, as a core contribution, this thesis aims to develop algorithms that follow first principles.
arXiv Detail & Related papers (2024-04-01T17:09:40Z)
- StructuredMesh: 3D Structured Optimization of Façade Components on Photogrammetric Mesh Models using Binary Integer Programming [17.985961236568663]
We present StructuredMesh, a novel approach for reconstructing façade structures conforming to the regularity of buildings within photogrammetric mesh models.
Our method involves capturing multi-view color and depth images of the building model using a virtual camera.
We then utilize the depth image to remap these boxes into 3D space, generating an initial façade layout.
arXiv Detail & Related papers (2023-06-07T06:40:54Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Geometric Processing for Image-based 3D Object Modeling [2.6397379133308214]
This article focuses on introducing the state-of-the-art methods of three major components of geometric processing: 1) georeferencing; 2) image dense matching; 3) texture mapping.
The largely automated geometric processing of images in a 3D object reconstruction workflow is becoming a critical part of reality-based 3D modeling.
arXiv Detail & Related papers (2021-06-27T18:33:30Z)
- Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.