Learning to regulate 3D head shape by removing occluding hair from in-the-wild images
- URL: http://arxiv.org/abs/2208.12078v1
- Date: Thu, 25 Aug 2022 13:18:26 GMT
- Title: Learning to regulate 3D head shape by removing occluding hair from in-the-wild images
- Authors: Sohan Anisetty, Varsha Saravanabavan, Cai Yiyu
- Abstract summary: We present a novel approach for modeling the upper head by removing occluding hair and reconstructing the skin.
Our unsupervised 3DMM model achieves state-of-the-art results on popular benchmarks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent 3D face reconstruction methods reconstruct the entire head, whereas earlier approaches model only the face. Although these methods accurately reconstruct facial features, they do not explicitly regulate the upper part of the head. Extracting information about this part of the head is challenging due to varying degrees of occlusion by hair. We present a novel approach for modeling the upper head by removing occluding hair and reconstructing the skin, revealing information about the head shape. We introduce three objectives: 1) a dice consistency loss that enforces similarity between the overall head shape of the source and rendered image, 2) a scale consistency loss that ensures the head shape is accurately reproduced even if the upper part of the head is not visible, and 3) a 71-landmark detector, trained using a moving-average loss function, that detects additional landmarks on the head. These objectives are used to train an encoder in an unsupervised manner to regress FLAME parameters from in-the-wild input images. Our unsupervised 3DMM model achieves state-of-the-art results on popular benchmarks and can be used to infer the head shape, facial features, and textures for direct use in animation or avatar creation.
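To make the first objective concrete, below is a minimal PyTorch sketch of a dice consistency loss computed between a soft head mask segmented from the source image and the silhouette of the rendered head mesh. The function name, tensor shapes, and the pipeline producing the masks are illustrative assumptions; the paper does not publish this code, and this is only the standard soft-dice formulation the loss name suggests.

    import torch

    def dice_consistency_loss(source_mask: torch.Tensor,
                              rendered_mask: torch.Tensor,
                              eps: float = 1e-6) -> torch.Tensor:
        # Flatten each (B, H, W) soft mask (values in [0, 1]) to (B, H*W).
        source = source_mask.reshape(source_mask.shape[0], -1)
        rendered = rendered_mask.reshape(rendered_mask.shape[0], -1)
        # Soft dice coefficient: 2*|A intersect B| / (|A| + |B|).
        intersection = (source * rendered).sum(dim=1)
        dice = (2.0 * intersection + eps) / (
            source.sum(dim=1) + rendered.sum(dim=1) + eps)
        # Minimizing 1 - dice pulls the rendered silhouette toward the source's.
        return 1.0 - dice.mean()

    # Usage (hypothetical inputs): masks from a head segmenter and a
    # differentiable renderer of the regressed FLAME mesh.
    src = torch.rand(4, 256, 256)   # head mask of the source image
    ren = torch.rand(4, 256, 256)   # silhouette of the rendered head
    loss = dice_consistency_loss(src, ren)

In a training loop, such a term would be differentiable with respect to the renderer's inputs, so its gradient can reach the encoder that regresses the FLAME parameters.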
Related papers
- HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation [95.58892028614444]
This work presents HeadArtist for 3D head generation from text descriptions.
We develop an efficient pipeline that optimizes a parameterized 3D head model under the supervision of prior distillation.
Experimental results suggest that our approach delivers high-quality 3D head sculptures with adequate geometry and photorealistic appearance.
arXiv Detail & Related papers (2023-12-12T18:59:25Z) - Controllable Dynamic Appearance for Neural 3D Portraits [54.29179484318194]
We propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions.
CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2023-09-20T02:24:40Z) - Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation [56.267877301135634]
Current full head generation methods require a large number of 3D scans or multi-view images to train the model.
We propose Head3D, a method to generate full 3D heads with limited multi-view images.
Our model achieves cost-efficient and diverse complete head generation with photo-realistic renderings and high-quality geometry representations.
arXiv Detail & Related papers (2023-03-28T11:12:26Z) - Single-Camera 3D Head Fitting for Mixed Reality Clinical Applications [41.63137498124499]
Our goal is to reconstruct the head model of each person to enable future mixed reality applications.
We recover a dense 3D reconstruction and camera information via structure-from-motion and multi-view stereo.
These are then used in a new two-stage fitting process to recover the 3D head shape.
arXiv Detail & Related papers (2021-09-06T21:03:52Z) - Prior-Guided Multi-View 3D Head Reconstruction [28.126115947538572]
Previous multi-view stereo methods struggle with low-frequency structures, yielding unclear head geometry and inaccurate reconstruction in hair regions.
To tackle this problem, we propose a prior-guided implicit neural rendering network.
The utilization of these priors can improve the reconstruction accuracy and robustness, leading to a high-quality integrated 3D head model.
arXiv Detail & Related papers (2021-07-09T07:43:56Z) - HeadGAN: One-shot Neural Head Synthesis and Editing [70.30831163311296]
HeadGAN is a system that conditions synthesis on 3D face representations adapted to the facial geometry of any reference image.
The 3D face representation further enables HeadGAN to be used as an efficient method for compression and reconstruction, and as a tool for expression and pose editing.
arXiv Detail & Related papers (2020-12-15T12:51:32Z) - i3DMM: Deep Implicit 3D Morphable Model of Human Heads [115.19943330455887]
We present the first deep implicit 3D morphable model (i3DMM) of full heads.
It not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair.
We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer.
arXiv Detail & Related papers (2020-11-28T15:01:53Z) - Deep 3D Portrait from a Single Image [54.634207317528364]
We present a learning-based approach for recovering the 3D geometry of a human head from a single portrait image.
A two-step geometry learning scheme is proposed to learn 3D head reconstruction from in-the-wild face images.
We evaluate the accuracy of our method both in 3D and with pose manipulation tasks on 2D images.
arXiv Detail & Related papers (2020-04-24T08:55:37Z) - Real-time Simultaneous 3D Head Modeling and Facial Motion Capture with an RGB-D camera [2.3260877354419254]
We propose a method to build animated 3D head models in real time using a consumer-grade RGB-D camera.
Anyone's head can be instantly reconstructed and their facial motion captured without any training or pre-scanning.
arXiv Detail & Related papers (2020-04-22T13:22:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.