OSTeC: One-Shot Texture Completion
- URL: http://arxiv.org/abs/2012.15370v1
- Date: Wed, 30 Dec 2020 23:53:26 GMT
- Title: OSTeC: One-Shot Texture Completion
- Authors: Baris Gecer, Jiankang Deng, Stefanos Zafeiriou
- Abstract summary: We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
- Score: 86.23018402732748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The last few years have witnessed the great success of non-linear
generative models in synthesizing high-quality photorealistic face images.
Many recent approaches to 3D facial texture reconstruction and pose
manipulation from a single image still rely on large and clean face datasets
to train image-to-image Generative Adversarial Networks (GANs). Yet
collecting such a large-scale, high-resolution 3D texture dataset remains
very costly, and it is difficult to maintain age/ethnicity balance in it.
Moreover, regression-based approaches generalize poorly to in-the-wild
conditions and cannot be fine-tuned to a target image. In this work, we
propose an unsupervised approach for one-shot 3D facial texture completion
that does not require large-scale texture datasets, but instead harnesses
the knowledge stored in 2D face generators. The proposed approach rotates an
input image in 3D and fills in the unseen regions by reconstructing the
rotated image in a 2D face generator, based on the visible parts. Finally,
we stitch the most visible textures from the different angles in the UV
image plane. Further, we frontalize the target image by projecting the
completed texture into the generator. Qualitative and quantitative
experiments demonstrate that the completed UV textures and frontalized
images are of high quality, resemble the original identity, can be used to
train a texture GAN model for 3DMM fitting, and improve pose-invariant face
recognition.
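To make the described pipeline concrete, below is a minimal, illustrative sketch of the steps named in the abstract (3DMM fitting, 3D rotation, completion by projection into a 2D generator, and visibility-weighted UV stitching). All helper names here (fit_3dmm, render_rotated, invert_in_2d_generator, sample_uv) are hypothetical placeholders, not the authors' actual code; the stubs return dummy arrays so the stitching loop runs end-to-end.

```python
import numpy as np

UV_RES = 256  # assumed UV texture resolution for this sketch

def fit_3dmm(image):
    # Placeholder: fit a 3D Morphable Model (shape + pose) to the input face.
    return {"shape": np.zeros(100), "pose": np.zeros(3)}

def render_rotated(image, mesh, yaw_deg):
    # Placeholder: re-render the face at a new yaw angle; regions occluded
    # in the input come back empty and must be hallucinated downstream.
    return np.zeros((UV_RES, UV_RES, 3))

def invert_in_2d_generator(partial_view):
    # Placeholder: optimize a latent of a pretrained 2D face generator
    # (e.g. StyleGAN) so its output matches the visible pixels of the
    # rotated render; the generator fills in the missing regions.
    return partial_view

def sample_uv(completed_view, mesh, yaw_deg):
    # Placeholder: back-project the completed 2D face into UV texture space
    # and return a per-texel visibility score for this viewing angle.
    texture = np.random.rand(UV_RES, UV_RES, 3)
    visibility = np.random.rand(UV_RES, UV_RES)
    return texture, visibility

def complete_texture(image, yaw_angles=(-60, -30, 0, 30, 60)):
    mesh = fit_3dmm(image)
    textures, weights = [], []
    for yaw in yaw_angles:
        rotated = render_rotated(image, mesh, yaw)    # expose unseen regions
        completed = invert_in_2d_generator(rotated)   # fill them in 2D
        tex, vis = sample_uv(completed, mesh, yaw)
        textures.append(tex)
        weights.append(vis)
    # Stitch: at each texel, keep the colour from the view where that texel
    # was most visible (the "most visible textures" of the abstract).
    textures = np.stack(textures)                     # (V, H, W, 3)
    weights = np.stack(weights)                       # (V, H, W)
    best = np.argmax(weights, axis=0)                 # (H, W)
    return np.take_along_axis(textures, best[None, ..., None], axis=0)[0]

uv_texture = complete_texture(np.zeros((1024, 1024, 3)))
print(uv_texture.shape)  # (256, 256, 3)
```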
Related papers
- 3D-GANTex: 3D Face Reconstruction with StyleGAN3-based Multi-View Images and 3DDFA based Mesh Generation [0.8479659578608233]
This paper introduces a novel method for texture estimation from a single image by first using StyleGAN and 3D Morphable Models.
The results show that the generated mesh is of high quality, with a near-accurate texture representation.
arXiv Detail & Related papers (2024-10-21T13:42:06Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for both tasks: generation of both geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)