TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images
- URL: http://arxiv.org/abs/2303.13743v1
- Date: Fri, 24 Mar 2023 01:52:03 GMT
- Title: TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images
- Authors: Vishal Vinod, Tanmay Shah, Dmitry Lagun
- Abstract summary: We propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single-view in-the-wild image collections.
We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision.
We demonstrate that a dense correspondence mapping to a 2D canonical space enables texture transfer and texture editing without requiring meshes with shared topology.
- Score: 1.4502611532302039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work in Neural Fields (NFs) learns 3D representations from class-specific single-view image collections. However, these methods are unable to reconstruct the input data while preserving high-frequency details. Further, they do not disentangle appearance from geometry and hence are not suitable for tasks such as texture transfer and editing. In this work, we propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single-view in-the-wild image collections for a given class of objects. We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision. We equip our method with editing capabilities by creating a dense correspondence mapping to a 2D canonical space. We demonstrate that such a mapping enables texture transfer and texture editing without requiring meshes with shared topology. Our key insight is that by mapping the input image pixels onto the texture space we can achieve near-perfect reconstruction (>= 74 dB PSNR at 1024^2 resolution). Our formulation allows for high-quality, 3D-consistent novel view synthesis with high-frequency details at megapixel image resolution.
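The claimed near-lossless reconstruction follows from baking observed pixels into the canonical texture through the dense correspondence and reading them back out. Below is a minimal numpy sketch of that round trip; the `uvs` array and `bake_and_resample` helper are hypothetical stand-ins for TEGLO's learned per-pixel canonical coordinates, not the paper's actual pipeline.

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - reconstruction) ** 2) + 1e-12  # eps avoids log(0)
    return 10.0 * np.log10(peak ** 2 / mse)

def bake_and_resample(image, uvs, tex_res=1024):
    """Scatter input pixels into a canonical texture via per-pixel UVs in
    [0, 1]^2, then sample the texture back at the same UVs (nearest texel)."""
    tex_ij = np.clip((uvs * (tex_res - 1)).round().astype(int), 0, tex_res - 1)
    texture = np.zeros((tex_res, tex_res, 3))
    texture[tex_ij[..., 1], tex_ij[..., 0]] = image   # bake observed pixels
    recon = texture[tex_ij[..., 1], tex_ij[..., 0]]   # re-render via lookup
    return recon, texture

# Toy check with an identity pixel-grid-to-UV mapping: when the correspondence
# is dense and collision-free, the round trip is essentially lossless, which is
# why per-pixel texture mapping can reach the reported >= 74 dB PSNR.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))
v, u = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256), indexing="ij")
recon, _ = bake_and_resample(image, np.stack([u, v], axis=-1))
print(f"PSNR: {psnr(image, recon):.1f} dB")  # ~120 dB, limited only by the eps
```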
Related papers
- TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure-from-motion stage, which can yield filtered depth maps.
We adopt a neural implicit surface reconstruction method, which allows for high-quality meshes.
arXiv Detail & Related papers (2023-03-27T10:07:52Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net, which learns to embed 3D surfaces into an aligned 2D UV space.
As a result, textures are aligned across objects and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications, including texture transfer, texture synthesis, and textured single-view 3D reconstruction.
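Since the UV layouts are aligned across objects, texture transfer reduces to sampling one object's texture at another object's UV coordinates. A hypothetical numpy sketch under that assumption (`texture_a`, `texture_b`, and `uvs_a` are stand-ins for AUV-Net's outputs):

```python
import numpy as np

def sample_texture(texture, uvs):
    """Nearest-texel lookup at UV coordinates in [0, 1]^2."""
    res = texture.shape[0]
    ij = np.clip((uvs * (res - 1)).round().astype(int), 0, res - 1)
    return texture[ij[..., 1], ij[..., 0]]

# Stand-ins for AUV-Net outputs: because the UV space is semantically aligned,
# the same UV coordinate addresses the same part (e.g. an eye) on every object.
rng = np.random.default_rng(0)
texture_a = rng.random((512, 512, 3))  # texture learned for object A
texture_b = rng.random((512, 512, 3))  # texture learned for object B
uvs_a = rng.random((10_000, 2))        # UVs of surface samples on object A

colors_own = sample_texture(texture_a, uvs_a)          # A with its own texture
colors_transferred = sample_texture(texture_b, uvs_a)  # A wearing B's texture
```

The same alignment is what lets ordinary 2D image generators synthesize new textures: every texture lives in the same coordinate frame.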
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for both tasks: generation of both geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
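The disentanglement can be pictured as a shading split inside the volume renderer: density comes from the 3D geometry representation, while color is fetched from a 2D texture through a learned 3D-to-2D mapping. A toy numpy sketch of that split (the `density_fn` and `uv_mapper` callables are illustrative stand-ins, not NeuTex's actual networks):

```python
import numpy as np

def shade_points(points, density_fn, uv_mapper, texture):
    """Per-sample shading: geometry -> density, texture lookup -> color."""
    sigma = density_fn(points)              # continuous 3D volume (geometry)
    uvs = uv_mapper(points)                 # learned 3D-to-2D mapping
    res = texture.shape[0]
    ij = np.clip((uvs * (res - 1)).round().astype(int), 0, res - 1)
    rgb = texture[ij[..., 1], ij[..., 0]]   # continuous 2D texture (appearance)
    return sigma, rgb

# Illustrative stand-ins: a soft sphere for geometry, an orthographic
# projection for the UV mapping, and a random image as the texture.
rng = np.random.default_rng(0)
density_fn = lambda p: np.exp(-4.0 * np.sum(p ** 2, axis=-1))
uv_mapper = lambda p: (p[..., :2] + 1.0) / 2.0
texture = rng.random((256, 256, 3))
sigma, rgb = shade_points(rng.uniform(-1, 1, (4096, 3)),
                          density_fn, uv_mapper, texture)
```

Because appearance lives entirely in the `texture` array, editing that image and re-rendering changes the surface appearance without touching the geometry.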
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)