TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using
Differentiable Rendering
- URL: http://arxiv.org/abs/2303.15060v1
- Date: Mon, 27 Mar 2023 10:07:52 GMT
- Authors: Jaehoon Choi, Dongki Jung, Taejae Lee, Sangwook Kim, Youngdong Jung,
Dinesh Manocha, Donghwan Lee
- Abstract summary: We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces RGBD-aided structure from motion, which yields filtered depth maps.
We adopt a neural implicit surface reconstruction method, which produces a high-quality mesh.
- Score: 54.35405028643051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new pipeline for acquiring a textured mesh in the wild with a
single smartphone, which offers access to images, depth maps, and valid poses.
Our method first introduces RGBD-aided structure from motion, which yields
filtered depth maps and refines camera poses guided by the corresponding depth.
We then adopt a neural implicit surface reconstruction method to obtain a
high-quality mesh, and develop a new training process that applies a
regularization derived from classical multi-view stereo methods. Moreover, we
apply differentiable rendering to fine-tune incomplete texture maps, generating
textures that are perceptually closer to the original scene. Our pipeline can
be applied to common objects in the real world without the need for either
in-the-lab environments or accurate mask images. We demonstrate results on
captured objects with complex shapes and validate our method numerically
against existing 3D reconstruction and texture mapping methods.
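The texture fine-tuning stage described above can be illustrated with a minimal sketch. This is not the authors' code: the 1D texture, the point-sampling "renderer," and the hand-written gradient are simplified assumptions standing in for a full differentiable renderer such as those used in the paper. The idea is the same, though: rendering is a differentiable function of the texture, so a photometric loss against observed pixels can be minimized by gradient descent on the texture values.

```python
import numpy as np

def render(texture, uv):
    # Differentiable sampling: linear interpolation into a 1D texture
    # at normalized coordinates uv in [0, 1].
    idx = uv * (len(texture) - 1)
    i0 = np.floor(idx).astype(int)
    i1 = np.minimum(i0 + 1, len(texture) - 1)
    w = idx - i0
    return (1 - w) * texture[i0] + w * texture[i1]

def finetune_texture(texture, uv, target, lr=1.0, steps=500):
    # Minimize the mean squared error between rendered and observed
    # values by gradient descent on the texture (hypothetical setup).
    texture = texture.copy()
    n = len(uv)
    idx = uv * (len(texture) - 1)
    i0 = np.floor(idx).astype(int)
    i1 = np.minimum(i0 + 1, len(texture) - 1)
    w = idx - i0
    for _ in range(steps):
        residual = render(texture, uv) - target
        # Backpropagate through the linear interpolation by hand:
        # each sample scatters its residual to its two source texels.
        grad = np.zeros_like(texture)
        np.add.at(grad, i0, (1 - w) * residual)
        np.add.at(grad, i1, w * residual)
        texture -= lr * grad / n
    return texture
```

In a real pipeline the renderer would rasterize the mesh with its UV parameterization and the loss would compare full rendered images against the captured photos, but the optimization loop has this same shape: render, compare, backpropagate into the texture map.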
Related papers
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generate view consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete, but metrically inaccurate depth maps.
Our method is able to generate dense, detailed, high-quality depth maps, also in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z)
- Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering [11.228453237603834]
We present a novel fine-grained multi-view hand mesh reconstruction method that leverages inverse rendering to restore hand poses and intricate details.
We also introduce a novel Hand Albedo and Mesh (HAM) optimization module to refine both the hand mesh and textures.
Our proposed approach outperforms the state-of-the-art methods on both reconstruction accuracy and rendering quality.
arXiv Detail & Related papers (2024-07-08T07:28:24Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that could transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry (represented as a continuous 3D volume) from appearance (represented as a continuous 2D texture map).
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from in-the-wild single-view images.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.