Unsupervised High-Fidelity Facial Texture Generation and Reconstruction
- URL: http://arxiv.org/abs/2110.04760v1
- Date: Sun, 10 Oct 2021 10:59:04 GMT
- Title: Unsupervised High-Fidelity Facial Texture Generation and Reconstruction
- Authors: Ron Slossberg, Ibrahim Jubran, Ron Kimmel
- Abstract summary: We propose a novel unified pipeline for both tasks: generation of geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
- Score: 20.447635896077454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many methods have been proposed over the years to tackle the task of facial
3D geometry and texture recovery from a single image. Such methods often fail
to provide high-fidelity texture without relying on 3D facial scans during
training. In contrast, the complementary task of 3D facial generation has not
received as much attention. As opposed to the 2D texture domain, where GANs
have proven to produce highly realistic facial images, the more challenging 3D
geometry domain has not yet caught up to the same levels of realism and
diversity.
In this paper, we propose a novel unified pipeline for both tasks: generation
of geometry and texture, and recovery of high-fidelity texture. Our
texture model is learned, in an unsupervised fashion, from natural images as
opposed to scanned texture maps. To the best of our knowledge, this is the
first such unified framework independent of scanned textures.
Our novel training pipeline incorporates a pre-trained 2D facial generator
coupled with a deep feature manipulation methodology. By applying precise 3DMM
fitting, we can seamlessly integrate our modeled textures into synthetically
generated background images forming a realistic composition of our textured
model with background, hair, teeth, and body. This enables us to apply transfer
learning from the domain of 2D image generation, thus benefiting greatly from
the impressive results obtained in this domain.
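The compositing step described above, blending the rendered textured face model into the 2D generator's output so that background, hair, teeth, and body come from the generated image, amounts to alpha compositing under the render's coverage mask. A minimal sketch of that idea follows; this is an illustration under assumed inputs, not the authors' implementation, and all names are hypothetical:

```python
import numpy as np

def composite_face(rendered_face, background, mask):
    """Alpha-composite a rendered textured face over a generated background.

    rendered_face, background: float32 arrays of shape (H, W, 3) in [0, 1].
    mask: float32 array of shape (H, W); per-pixel coverage of the fitted
    3DMM render (1 where the face model is visible, 0 elsewhere).
    """
    alpha = mask[..., None].astype(np.float32)  # (H, W, 1) broadcasts over RGB
    return alpha * rendered_face + (1.0 - alpha) * background

# Toy example: a white 4x4 "render" pasted onto a black background,
# with the fitted model covering only the central 2x2 region.
bg = np.zeros((4, 4, 3), dtype=np.float32)
face = np.ones((4, 4, 3), dtype=np.float32)
m = np.zeros((4, 4), dtype=np.float32)
m[1:3, 1:3] = 1.0
out = composite_face(face, bg, m)
```

In practice the mask would come from the 3DMM rasterizer, and a soft (anti-aliased) mask at the silhouette avoids visible seams in the composition.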
We provide a comprehensive study comparing our model against several recent
methods on both generation and reconstruction tasks. As the extensive
qualitative and quantitative analyses demonstrate, we achieve
state-of-the-art results for both tasks.
Related papers
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z) - TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z) - Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - Leveraging 2D Data to Learn Textured 3D Mesh Generation [33.32377849866736]
We present the first generative model of textured 3D meshes.
We train our model to explain a distribution of images by modelling each image as a 3D foreground object.
It learns to generate meshes that when rendered, produce images similar to those in its training set.
arXiv Detail & Related papers (2020-04-08T18:00:37Z) - AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z) - Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.