Real-time Simultaneous 3D Head Modeling and Facial Motion Capture with
an RGB-D camera
- URL: http://arxiv.org/abs/2004.10557v1
- Date: Wed, 22 Apr 2020 13:22:21 GMT
- Title: Real-time Simultaneous 3D Head Modeling and Facial Motion Capture with
an RGB-D camera
- Authors: Diego Thomas
- Abstract summary: We propose a method to build animated 3D head models in real time using a consumer-grade RGB-D camera.
Anyone's head can be instantly reconstructed and their facial motion captured without any training or pre-scanning.
- Score: 2.3260877354419254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to build animated 3D head models in real time
using a consumer-grade RGB-D camera. Our proposed method is the first to
provide comprehensive facial motion tracking and a detailed 3D model of the
user's head simultaneously. Anyone's head can be instantly reconstructed and
their facial motion captured without any training or pre-scanning. The user
starts facing the camera with a neutral expression in the first frame, but is
free to move, talk and change facial expression at will thereafter. The facial
motion is captured using a blendshape animation model, while geometric details
are captured using a deviation image mapped over the template mesh. We
contribute an efficient algorithm to grow and refine the deforming 3D head
model on-the-fly and in real time. We demonstrate robust and
high-fidelity simultaneous facial motion capture and 3D head modeling results
on a wide range of subjects with various head poses and facial expressions.
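The abstract names two standard capture layers: a linear blendshape model for facial motion and a scalar deviation image, mapped over the template mesh, for geometric detail. Below is a minimal NumPy sketch of how such layers typically compose per frame; the variable names and the nearest-neighbor UV lookup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def evaluate_head(template, deltas, weights, normals, uv, deviation_img):
    """Compose a blendshape pose with per-vertex geometric detail.

    template      (V, 3) neutral-expression template vertices
    deltas        (B, V, 3) blendshape offsets from the neutral face
    weights       (B,) per-frame expression coefficients
    normals       (V, 3) unit vertex normals
    uv            (V, 2) per-vertex texture coordinates in [0, 1]
    deviation_img (H, W) scalar deviation image mapped over the template
    """
    # Facial motion: linear blendshape combination.
    posed = template + np.einsum("b,bvc->vc", weights, deltas)

    # Geometric detail: sample the deviation image at each vertex's UV
    # coordinate (nearest neighbor for brevity) and displace the vertex
    # along its normal.
    h, w = deviation_img.shape
    col = np.clip(np.rint(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    row = np.clip(np.rint(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = deviation_img[row, col]            # (V,) signed offsets
    return posed + d[:, None] * normals
```

The paper's stated contribution is growing and refining this deforming model on the fly; the sketch covers only the per-frame evaluation of the two capture layers.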
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis [88.17520303867099]
One-shot 3D talking portrait generation aims to reconstruct a 3D avatar from an unseen image, and then animate it with a reference video or audio.
We present Real3D-Portrait, a framework that improves one-shot 3D reconstruction with a large image-to-plane model.
Experiments show that Real3D-Portrait generalizes well to unseen identities and generates more realistic talking portrait videos.
arXiv Detail & Related papers (2024-01-16T17:04:30Z)
- HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs [9.790185628415301]
We introduce a generative model for detailed 3D head meshes on top of an articulated 3DMM.
We train a StyleGAN model to generalize over the UV maps of displacements.
We demonstrate results for unconditional generation and for fitting to full or partial observations.
arXiv Detail & Related papers (2023-12-21T18:57:52Z)
- Controllable Dynamic Appearance for Neural 3D Portraits [54.29179484318194]
We propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions.
CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2023-09-20T02:24:40Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space [38.940128217895115]
We propose Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs.
An RGBD animation module learns facial motion from raw RGBD videos via hierarchical motion dictionaries and animates RGBD images rendered from a 3D facial mesh coarse-to-fine, enabling facial animation on arbitrary 3D characters.
Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results.
arXiv Detail & Related papers (2023-08-11T11:29:01Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- Generating Animatable 3D Cartoon Faces from Single Portraits [51.15618892675337]
We present a novel framework to generate animatable 3D cartoon faces from a single portrait image.
We propose a two-stage reconstruction method to recover the 3D cartoon face with detailed texture.
Finally, we propose a semantic preserving face rigging method based on manually created templates and deformation transfer.
arXiv Detail & Related papers (2023-07-04T04:12:50Z)
- Single-Camera 3D Head Fitting for Mixed Reality Clinical Applications [41.63137498124499]
Our goal is to reconstruct the head model of each person to enable future mixed reality applications.
We recover a dense 3D reconstruction and camera information via structure-from-motion and multi-view stereo.
These are then used in a new two-stage fitting process to recover the 3D head shape.
arXiv Detail & Related papers (2021-09-06T21:03:52Z)
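The last entry above fits a head template to a structure-from-motion/multi-view-stereo reconstruction. Its two-stage fitting process is not spelled out in the summary, but a common first stage in such pipelines is a closed-form similarity alignment between corresponding landmarks (Umeyama's method). A minimal sketch, assuming landmark correspondences are already available; this is a generic alignment step, not that paper's actual algorithm.

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (s, R, t) with dst ~ s * R @ src + t.

    src, dst: (N, 3) corresponding points, e.g. template landmarks and the
    same landmarks located in the multi-view-stereo reconstruction.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                      # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.array([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = (U * d) @ Vt                                # rotation, reflection-safe
    s = (S * d).sum() / (xs ** 2).sum() * len(src)  # optimal uniform scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

A second, non-rigid stage would then refine the head shape against the dense reconstruction; that stage depends on the paper's shape model and is not sketched here.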
This list is automatically generated from the titles and abstracts of the papers on this site.