AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"
- URL: http://arxiv.org/abs/2003.13845v1
- Date: Mon, 30 Mar 2020 22:17:54 GMT
- Title: AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"
- Authors: Alexandros Lattas, Stylianos Moschoglou, Baris Gecer, Stylianos
Ploumpis, Vasileios Triantafyllou, Abhijeet Ghosh, Stefanos Zafeiriou
- Abstract summary: AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing state of the art by a significant margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single low-resolution image.
- Score: 105.28776215113352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, with the advent of Generative Adversarial Networks
(GANs), many face analysis tasks have achieved astounding performance, with
applications including, but not limited to, face generation and 3D face
reconstruction from a single "in-the-wild" image. Nevertheless, to the best of
our knowledge, there is no method which can produce high-resolution
photorealistic 3D faces from "in-the-wild" images. This can be attributed to:
(a) the scarcity of available training data, and (b) the lack of robust
methodologies that can successfully be applied to very high-resolution data. In
this paper, we introduce AvatarMe, the first method that is able to reconstruct
photorealistic 3D faces from a single "in-the-wild" image with an increasing
level of detail. To achieve this, we capture a large dataset of facial shape
and reflectance and build on a state-of-the-art 3D texture and shape
reconstruction method and successively refine its results, while generating the
per-pixel diffuse and specular components that are required for realistic
rendering. As we demonstrate in a series of qualitative and quantitative
experiments, AvatarMe outperforms the existing state of the art by a significant
margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single
low-resolution image, bridging, for the first time, the uncanny valley.
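The abstract's key rendering idea is that the method recovers per-pixel diffuse and specular reflectance maps, which can then be combined by any standard shading model. As a minimal sketch of how such maps are consumed at render time (the shading model, parameter names, and shininess value here are illustrative assumptions, not the paper's actual renderer), a Blinn-Phong combination of a diffuse and a specular map might look like:

```python
import numpy as np

def shade(albedo, specular_albedo, normals, light_dir, view_dir, shininess=32.0):
    """Combine per-pixel diffuse and specular reflectance maps into a shaded image.

    albedo, specular_albedo: (H, W, 3) reflectance maps in [0, 1]
    normals: (H, W, 3) unit surface normals
    light_dir, view_dir: (3,) unit vectors toward the light / camera
    """
    # Lambertian diffuse term: albedo scaled by the cosine of the incident angle.
    n_dot_l = np.clip(normals @ light_dir, 0.0, 1.0)[..., None]
    diffuse = albedo * n_dot_l

    # Blinn-Phong specular term using the half vector between light and view.
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    n_dot_h = np.clip(normals @ half, 0.0, 1.0)[..., None]
    specular = specular_albedo * n_dot_h ** shininess

    return np.clip(diffuse + specular, 0.0, 1.0)
```

Because the two components are stored separately, the same reconstruction can be relit under arbitrary lighting, which is what makes the faces "realistically renderable" rather than baked into a single texture.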
Related papers
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D renderings for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing state of the art by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction from a Single Image [19.0074836183624]
We propose a novel self-supervised learning framework for reconstructing high-quality 3D faces from single-view images in-the-wild.
Our framework substantially outperforms state-of-the-art approaches in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2021-11-16T08:10:24Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity preserving 3D face reconstructions and achieve for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.