Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face
Reconstruction
- URL: http://arxiv.org/abs/2105.07474v1
- Date: Sun, 16 May 2021 16:35:44 GMT
- Title: Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face
Reconstruction
- Authors: Baris Gecer, Stylianos Ploumpis, Irene Kotsia, Stefanos Zafeiriou
- Abstract summary: We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
- Score: 76.1612334630256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A lot of work has been done towards reconstructing the 3D facial structure
from single images by capitalizing on the power of Deep Convolutional Neural
Networks (DCNNs). In recent works, the texture features either correspond
to components of a linear texture space or are learned by auto-encoders
directly from in-the-wild images. In all cases, the quality of the facial
texture reconstruction is still not capable of modeling facial texture with
high-frequency details. In this paper, we take a radically different approach
and harness the power of Generative Adversarial Networks (GANs) and DCNNs in
order to reconstruct the facial texture and shape from single images. That is,
we utilize GANs to train a very powerful facial texture prior from a
large-scale 3D texture dataset. Then, we revisit the original 3D Morphable
Models (3DMMs) fitting, making use of non-linear optimization to find the
optimal latent parameters that best reconstruct the test image but under a new
perspective. In order to be robust towards initialisation and expedite the
fitting process, we propose a novel self-supervised regression-based approach.
We demonstrate excellent results in photorealistic and identity-preserving 3D
face reconstructions and achieve, for the first time to the best of our
knowledge, facial texture reconstruction with high-frequency details.
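The fitting step described above, searching a pretrained generator's latent space for the code that best reconstructs a test image, can be illustrated with a minimal sketch. This is a toy stand-in only: the "generator" here is a fixed random linear map and the loss is a plain pixel error, whereas the actual method uses a GAN texture model, a differentiable renderer, and identity losses. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the pretrained texture generator: a fixed random
# linear map from latent codes to (flattened) textures. The real method
# uses a GAN trained on a large-scale 3D texture dataset.
rng = np.random.default_rng(0)
LATENT_DIM, IMAGE_DIM = 16, 64
G = rng.standard_normal((IMAGE_DIM, LATENT_DIM))

def generate(z):
    """Map a latent code to a (flattened) toy texture."""
    return G @ z

def fit_latent(target, steps=500, lr=5e-3):
    """Gradient descent on the squared reconstruction error to find the
    latent code whose generated texture best matches `target` -- the
    same optimization pattern as 3DMM/GAN fitting, in miniature."""
    z = np.zeros(LATENT_DIM)
    for _ in range(steps):
        residual = generate(z) - target   # d(loss)/d(output)
        grad = G.T @ residual             # chain rule through the linear G
        z -= lr * grad
    return z

# Synthesize a target from a known latent code and try to recover it.
z_true = rng.standard_normal(LATENT_DIM)
target = generate(z_true)
z_fit = fit_latent(target)
error = float(np.linalg.norm(generate(z_fit) - target))
```

Because the toy generator is linear, plain gradient descent converges to a near-perfect reconstruction; the paper's regression-based initialization addresses exactly the sensitivity that this optimization exhibits when the generator is a deep non-linear network.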
Related papers
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - AvatarMe++: Facial Shape and BRDF Inference with Photorealistic
Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing state of the art by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z) - Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction
from a Single Image [19.0074836183624]
We propose a novel self-supervised learning framework for reconstructing high-quality 3D faces from single-view images in-the-wild.
Our framework substantially outperforms state-of-the-art approaches in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2021-11-16T08:10:24Z) - Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for both tasks: generation of both geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z) - Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Rather than relying on hand-crafted graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction datasets.
arXiv Detail & Related papers (2021-05-06T04:16:06Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
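The completion idea in the summary above, reconstruct the visible part of a rotated view with a pretrained 2D generator and read off the generator's prediction for the hidden pixels, can be sketched in miniature. This is a hedged illustration, not OSTeC's implementation: the "generator" is a linear map so the fit reduces to least squares, and all names and dimensions are assumptions.

```python
import numpy as np

# Toy stand-in for a pretrained 2D face generator: a fixed random
# linear map from latent codes to flattened textures.
rng = np.random.default_rng(1)
LATENT_DIM, PIXELS = 8, 32
G = rng.standard_normal((PIXELS, LATENT_DIM))

def complete(texture, visible):
    """Fit a latent code to the visible pixels only (least squares for
    this linear toy), then fill the hidden pixels with the generator's
    output -- the completion-by-reconstruction pattern in miniature."""
    z, *_ = np.linalg.lstsq(G[visible], texture[visible], rcond=None)
    completed = texture.copy()
    completed[~visible] = (G @ z)[~visible]
    return completed

# A texture that lies on the generator manifold, with its back half
# occluded (as after rotating the face away from the camera).
full = G @ rng.standard_normal(LATENT_DIM)
visible = np.arange(PIXELS) < PIXELS // 2
recovered = complete(full, visible)
```

Since the visible half provides more constraints than the latent dimensionality, the latent code is recovered exactly and the hidden pixels are filled correctly; in the real pipeline the generator is a deep GAN and the fit is an iterative inversion rather than one linear solve.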
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - AvatarMe: Realistically Renderable 3D Facial Reconstruction
"in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing state of the art by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z) - Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images
Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.