3DFaceFill: An Analysis-By-Synthesis Approach to Face Completion
- URL: http://arxiv.org/abs/2110.10395v1
- Date: Wed, 20 Oct 2021 06:31:47 GMT
- Authors: Rahul Dey and Vishnu Boddeti
- Abstract summary: 3DFaceFill is an analysis-by-synthesis approach for face completion that explicitly considers the image formation process.
It comprises three components: (1) an encoder that disentangles the face into its constituent 3D mesh, 3D pose, illumination, and albedo factors; (2) an autoencoder that inpaints the UV representation of the facial albedo; and (3) a renderer that resynthesizes the completed face.
- Score: 2.0305676256390934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing face completion solutions are primarily driven by end-to-end models
that directly generate 2D completions of 2D masked faces. By having to
implicitly account for geometric and photometric variations in facial shape and
appearance, such approaches result in unrealistic completions, especially under
large variations in pose, shape, illumination and mask sizes. To alleviate
these limitations, we introduce 3DFaceFill, an analysis-by-synthesis approach
for face completion that explicitly considers the image formation process. It
comprises three components: (1) an encoder that disentangles the face into its
constituent 3D mesh, 3D pose, illumination, and albedo factors; (2) an
autoencoder that inpaints the UV representation of facial albedo; and (3) a
renderer that resynthesizes the completed face. By operating on the UV
representation, 3DFaceFill affords the power of correspondence and allows us to
naturally enforce geometrical priors (e.g. facial symmetry) more effectively.
Quantitatively, 3DFaceFill improves the state-of-the-art by up to 4dB higher
PSNR and 25% better LPIPS for large masks. And, qualitatively, it leads to
demonstrably more photorealistic face completions over a range of masks and
occlusions while preserving consistency in global and component-wise shape,
pose, illumination and eye-gaze.
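The facial-symmetry prior that the abstract says is enforced in UV space can be illustrated with a minimal sketch: a masked UV-albedo pixel whose horizontally mirrored counterpart is visible can be filled from that mirror. The function name, array shapes, and masking scheme below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def symmetry_inpaint_uv(albedo_uv, mask):
    """Toy UV-space symmetry prior: fill each masked pixel from its
    horizontal mirror when the mirror pixel is visible.

    albedo_uv : (H, W) float array, the UV albedo map
    mask      : (H, W) bool array, True where albedo is missing
    Returns the partially filled map and the remaining holes
    (pixels whose mirror was also masked).
    """
    filled = albedo_uv.copy()
    mirrored = albedo_uv[:, ::-1]      # reflect across the vertical UV axis
    mirror_mask = mask[:, ::-1]
    # a masked pixel is recoverable only if its mirror is unmasked
    recover = mask & ~mirror_mask
    filled[recover] = mirrored[recover]
    remaining = mask & mirror_mask
    return filled, remaining

# toy 4x4 UV albedo with the left half masked out
uv = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
filled, holes = symmetry_inpaint_uv(uv, mask)
# row 0 of `uv` is [0, 1, 2, 3]; its masked left half is filled from
# the visible right half, giving [3, 2, 2, 3], with no holes left
```

In the real system this prior would act on a learned UV parameterization produced by the 3DMM fitting stage, and only approximately, since faces are not perfectly symmetric under illumination; the sketch shows the correspondence idea, not the full inpainting autoencoder.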
Related papers
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Non-Deterministic Face Mask Removal Based On 3D Priors [3.8502825594372703]
The proposed approach integrates a multi-task 3D face reconstruction module with a face inpainting module.
By gradually controlling the 3D shape parameters, our method generates high-quality dynamic inpainting results with different expressions and mouth movements.
arXiv Detail & Related papers (2022-02-20T16:27:44Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that a priori models physical attributes of the face explicitly, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over the state of the art.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.