High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method
- URL: http://arxiv.org/abs/2406.15121v1
- Date: Fri, 21 Jun 2024 13:14:48 GMT
- Title: High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method
- Authors: F. S. Mortazavi, M. Saadatseresht
- Abstract summary: The shape from polarization method is investigated: a passive method that avoids the drawbacks of active methods.
The resolution of the depth maps can be dramatically increased using the information obtained from polarized light.
The fusion of polarization and photogrammetric methods is an appropriate solution for achieving high resolution three-dimensional reconstruction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, three-dimensional reconstruction is used in various fields such as computer vision, computer graphics, mixed reality and digital twins. The three-dimensional reconstruction of cultural heritage objects is one of the most important applications in this area and is usually accomplished by close-range photogrammetry. The problem is that the images are often noisy, and in practice the dense image matching method has significant limitations in reconstructing the geometric details of cultural heritage objects. Therefore, displaying high-level details in three-dimensional models, especially for cultural heritage objects, is a severe challenge in this field. In this paper, the shape from polarization method is investigated: a passive method that avoids the drawbacks of active methods. In this method, the resolution of the depth maps can be dramatically increased using the information obtained from polarized light by rotating a linear polarizing filter in front of a digital camera. From these polarized images, the surface details of the object can be reconstructed locally with high accuracy. The fusion of polarization and photogrammetric methods is an appropriate solution for achieving high-resolution three-dimensional reconstruction. The surface reconstruction has been assessed both visually and quantitatively. The evaluations showed that, compared to the photogrammetric method, the proposed method reconstructs surface details in the three-dimensional model significantly better, with 10 times higher depth resolution.
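As a concrete illustration of the shape from polarization step described in the abstract, the sketch below fits the standard transmitted-radiance sinusoid to captures taken at several polarizer angles and recovers the per-pixel degree and angle of polarization, plus a zenith angle from the diffuse dielectric model. It is a minimal sketch with assumed array shapes, function names, and a refractive index of 1.5; it is not the authors' implementation, and it leaves the azimuth ambiguity and the fusion with photogrammetric depth unresolved.

```python
# Minimal shape-from-polarization sketch (assumed names and shapes, not the
# paper's implementation). Model: I(phi_pol) = I_un * (1 + rho * cos(2*phi_pol - 2*phi))
import numpy as np

def fit_polarization(images, polarizer_angles_deg):
    """images: (K, H, W) grayscale captures, one per polarizer angle."""
    angles = np.deg2rad(np.asarray(polarizer_angles_deg))            # (K,)
    # Linear model I = a0 + a1*cos(2a) + a2*sin(2a); per-pixel least squares.
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)                       # (K, 3)
    K, H, W = images.shape
    coeffs, *_ = np.linalg.lstsq(A, images.reshape(K, -1), rcond=None)  # (3, H*W)
    a0, a1, a2 = coeffs.reshape(3, H, W)
    rho = np.sqrt(a1**2 + a2**2) / np.maximum(a0, 1e-8)   # degree of linear polarization
    phi = 0.5 * np.arctan2(a2, a1)                        # angle of polarization
    return a0, rho, phi

def zenith_from_dop(rho, n=1.5, samples=2048):
    """Numerically invert the diffuse-polarization model to get the zenith angle."""
    theta = np.linspace(0.0, np.pi / 2 - 1e-3, samples)
    s2 = np.sin(theta) ** 2
    rho_model = ((n - 1 / n) ** 2 * s2) / (
        2 + 2 * n**2 - (n + 1 / n) ** 2 * s2 + 4 * np.cos(theta) * np.sqrt(n**2 - s2)
    )
    # rho_model is monotonically increasing in theta, so interpolation is valid.
    return np.interp(rho, rho_model, theta)

# Usage sketch with synthetic captures at 0/45/90/135 degrees:
# angs = [0, 45, 90, 135]
# I_un, rho_t, phi_t = 0.5, 0.3, np.deg2rad(20.0)
# imgs = np.stack([I_un * (1 + rho_t * np.cos(2*np.deg2rad(a) - 2*phi_t)) * np.ones((4, 4))
#                  for a in angs])
# i_un, rho, phi = fit_polarization(imgs, angs)
# theta = zenith_from_dop(rho)   # azimuth is phi or phi + pi/2 (diffuse ambiguity)
```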
Related papers
- Surface Normal Reconstruction Using Polarization-Unet [0.0]
Shape from polarization (SfP) is one of the best solutions for high-resolution three-dimensional reconstruction of objects.
In this paper, an end-to-end deep learning approach is presented to produce the surface normals of objects.
arXiv Detail & Related papers (2024-06-21T13:09:58Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown strong performance for photometric stereo on non-Lambertian surfaces; a minimal classical sketch of calibrated photometric stereo is given after this list.
arXiv Detail & Related papers (2022-12-16T11:27:44Z)
- High-Quality RGB-D Reconstruction via Multi-View Uncalibrated Photometric Stereo and Gradient-SDF [48.29050063823478]
We present a novel multi-view RGB-D based reconstruction method that tackles camera pose, lighting, albedo, and surface normal estimation.
The proposed method formulates the image rendering process using specific physically-based model(s) and optimizes the surface's volumetric quantities on the actual surface.
arXiv Detail & Related papers (2022-10-21T19:09:08Z)
- Super-resolution 3D Human Shape from a Single Low-Resolution Image [33.70299493354903]
We propose a novel framework to reconstruct super-resolution human shape from a single low-resolution input image.
The proposed framework represents the reconstructed shape with a high-detail implicit function.
arXiv Detail & Related papers (2022-08-23T05:24:39Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
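For the calibrated photometric stereo entry above, the following is a minimal classical Lambertian least-squares solver, included only to make the underlying image formation model concrete. It is an assumed baseline sketch with hypothetical function and variable names, not the deep learning method of that paper.

```python
# Classical calibrated photometric stereo (Lambertian baseline sketch):
# given images under known directional lights, recover per-pixel albedo and normal.
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale captures; light_dirs: (K, 3) unit light directions."""
    K, H, W = images.shape
    L = np.asarray(light_dirs, dtype=np.float64)        # (K, 3)
    I = images.reshape(K, -1)                           # (K, H*W)
    # Lambertian model: I = L @ (albedo * n); solve for b = albedo * n by least squares.
    b, *_ = np.linalg.lstsq(L, I, rcond=None)           # (3, H*W)
    albedo = np.linalg.norm(b, axis=0)                  # (H*W,)
    normals = b / np.maximum(albedo, 1e-8)              # unit surface normals
    return albedo.reshape(H, W), normals.reshape(3, H, W)

# Usage sketch with synthetic data:
# n_true = np.array([0.0, 0.0, 1.0])
# lights = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
# lights /= np.linalg.norm(lights, axis=1, keepdims=True)
# imgs = np.stack([np.full((8, 8), max(l @ n_true, 0.0)) for l in lights])
# albedo, normals = lambertian_photometric_stereo(imgs, lights)
```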