Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space
- URL: http://arxiv.org/abs/2308.06076v1
- Date: Fri, 11 Aug 2023 11:29:01 GMT
- Title: Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space
- Authors: Haoyu Wang, Haozhe Wu, Junliang Xing, Jia Jia
- Abstract summary: We propose Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs.
Our method has two main characteristics: 1) an RGBD animation module that learns facial motion from raw RGBD videos via hierarchical motion dictionaries and animates RGBD images rendered from the 3D facial mesh in a coarse-to-fine manner, enabling facial animation on arbitrary 3D characters; and 2) a mesh retarget module that manipulates the facial mesh with controller transformations estimated from dense optical flow fields and blended with geodesic-distance-based weights.
Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results.
- Score: 38.940128217895115
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Creating realistic 3D facial animation is crucial for various applications in
the movie production and gaming industry, especially with the burgeoning demand
in the metaverse. However, prevalent methods such as blendshape-based
approaches and facial rigging techniques are time-consuming, labor-intensive,
and lack standardized configurations, making facial animation production
challenging and costly. In this paper, we propose a novel self-supervised
framework, Versatile Face Animator, which combines facial motion capture with
motion retargeting in an end-to-end manner, eliminating the need for
blendshapes or rigs. Our method has two main characteristics: 1) we propose an
RGBD animation module that learns facial motion from raw RGBD videos via
hierarchical motion dictionaries and animates RGBD images rendered from the 3D
facial mesh in a coarse-to-fine manner, enabling facial animation on arbitrary
3D characters regardless of their topology, textures, blendshapes, and rigs;
and 2) we introduce a mesh retarget module that uses the RGBD animation to
create 3D facial animation by manipulating the facial mesh with controller
transformations, which are estimated from dense optical flow fields and
blended together with geodesic-distance-based weights. Comprehensive
experiments demonstrate the
effectiveness of our proposed framework in generating impressive 3D facial
animation results, highlighting its potential as a promising solution for the
cost-effective and efficient production of facial animation in the metaverse.
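The abstract leaves the RGBD animation module's internals unspecified, so the following is only one plausible reading of "hierarchical motion dictionaries" applied "coarse-to-fine": a minimal PyTorch sketch in which each pyramid level owns a learnable bank of flow atoms, mixes them with predicted per-pixel coefficients, and warps the rendered RGBD image before handing it to the next, finer level. The module name, the atom parameterization, and the three-scale pyramid are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionDictionaryLevel(nn.Module):
    """One pyramid level: K learnable flow 'atoms' plus a 3x3 conv that
    predicts per-pixel mixing coefficients from the 4-channel RGBD input.
    (Hypothetical stand-in for the paper's hierarchical motion dictionary.)"""
    def __init__(self, k=16, atom_size=32, in_ch=4):
        super().__init__()
        self.atoms = nn.Parameter(torch.zeros(k, 2, atom_size, atom_size))
        self.coef = nn.Conv2d(in_ch, k, kernel_size=3, padding=1)

    def forward(self, x):
        c = torch.softmax(self.coef(x), dim=1)                  # (B,K,H,W)
        atoms = F.interpolate(self.atoms, size=x.shape[-2:],
                              mode="bilinear", align_corners=False)
        # Dense flow field as a per-pixel convex combination of the atoms.
        return torch.einsum("bkhw,kchw->bchw", c, atoms)        # (B,2,H,W)

def warp(img, flow):
    """Backward-warp img by a pixel-space flow field using grid_sample."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(img)        # (H,W,2)
    grid = base + flow.permute(0, 2, 3, 1)                      # (B,H,W,2)
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0                     # to [-1,1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def animate(rgbd, levels, scales=(0.25, 0.5, 1.0)):
    """Refine the rendered RGBD frame coarse-to-fine: at each scale, predict
    a flow from the downsampled current estimate and warp the full frame."""
    out = rgbd
    for level, s in zip(levels, scales):
        small = F.interpolate(out, scale_factor=s, mode="bilinear",
                              align_corners=False)
        flow = F.interpolate(level(small), size=out.shape[-2:],
                             mode="bilinear", align_corners=False) / s
        out = warp(out, flow)
    return out
```

Usage would look like `levels = nn.ModuleList(MotionDictionaryLevel() for _ in range(3))` followed by `animate(rendered_rgbd, levels)`; in the paper the mixing coefficients would presumably be conditioned on the captured driving motion rather than on the rendered frame alone.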
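The mesh retarget module's blending step is described concretely enough to sketch: controller transformations (estimated, per the abstract, from dense optical flow fields, a step not reproduced here) are blended across the mesh with geodesic-distance-based weights. Below is a minimal NumPy/SciPy sketch that approximates geodesic distance with Dijkstra over the mesh edge graph and linearly blends per-controller rigid transforms with normalized Gaussian weights; the Gaussian falloff and its `sigma` are illustrative assumptions, since the paper only states that the weights are geodesic-distance-based.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distances(verts, faces, ctrl_idx):
    """Approximate geodesic distance from each controller vertex to every
    mesh vertex via Dijkstra on the edge graph (edge weight = edge length)."""
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)            # dedupe shared edges
    w = np.linalg.norm(verts[e[:, 0]] - verts[e[:, 1]], axis=1)
    n = len(verts)
    graph = coo_matrix((np.r_[w, w], (np.r_[e[:, 0], e[:, 1]],
                                      np.r_[e[:, 1], e[:, 0]])), shape=(n, n))
    return dijkstra(graph.tocsr(), indices=ctrl_idx)     # (n_ctrl, n_verts)

def blend_controllers(verts, dists, rotations, translations, centers,
                      sigma=0.05):
    """Deform vertices by a weighted sum of per-controller rigid transforms,
    with normalized Gaussian weights on geodesic distance (a simple linear
    blend; the paper does not spell out the exact blending operator)."""
    w = np.exp(-dists**2 / (2.0 * sigma**2))             # (n_ctrl, n_verts)
    w /= w.sum(axis=0, keepdims=True) + 1e-8
    out = np.zeros_like(verts)
    for j in range(len(centers)):
        # Rotate about the controller center, then translate.
        moved = (verts - centers[j]) @ rotations[j].T + centers[j] + translations[j]
        out += w[j][:, None] * moved
    return out
```

One design caveat: linearly blending rotations can produce the usual candy-wrapper artifacts; a dual-quaternion blend would be the standard alternative if the estimated controller transforms carry large rotational components.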
Related papers
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation [68.04052669266174]
We construct a large-scale multi-modal 3D facial animation dataset, MMHead.
MMHead consists of 49 hours of 3D facial motion sequences, speech audio, and rich hierarchical text annotations.
Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation.
arXiv Detail & Related papers (2024-10-10T09:37:01Z)
- G3FA: Geometry-guided GAN for Face Animation [14.488117084637631]
We introduce Geometry-guided GAN for Face Animation (G3FA), which empowers the face animation model to incorporate 3D information using only 2D images.
In our face reenactment model, we leverage 2D motion warping to capture motion dynamics.
arXiv Detail & Related papers (2024-08-23T13:13:24Z)
- MotionDreamer: Zero-Shot 3D Mesh Animation from Video Diffusion Models [10.263762787854862]
We propose a technique for automatically re-animating arbitrary 3D shapes based on a motion prior extracted from a video diffusion model.
We leverage an explicit mesh-based representation compatible with existing computer-graphics pipelines.
Our time-efficient zero-shot method achieves superior performance in re-animating a diverse set of 3D shapes.
arXiv Detail & Related papers (2024-05-30T15:30:38Z)
- Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters [24.615066741391125]
We propose a holistic solution to automatically animate virtual human faces.
A deep learning model was first trained to retarget facial expressions from input face images to virtual human faces.
A practical toolkit was developed using Unity 3D, making it compatible with the most popular VR applications.
arXiv Detail & Related papers (2024-02-21T11:35:20Z)
- DF-3DFace: One-to-Many Speech Synchronized 3D Face Animation with Diffusion [68.85904927374165]
We propose DF-3DFace, a diffusion-driven speech-to-3D face mesh synthesis method.
It captures the complex one-to-many relationships between speech and 3D faces based on diffusion.
It also achieves more realistic facial animation than state-of-the-art methods.
arXiv Detail & Related papers (2023-08-23T04:14:55Z)
- 3D Cinemagraphy from a Single Image [73.09720823592092]
We present 3D Cinemagraphy, a new technique that marries 2D image animation with 3D photography.
Given a single still image as input, our goal is to generate a video that contains both visual content animation and camera motion.
arXiv Detail & Related papers (2023-03-10T06:08:23Z)
- AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars [71.00322191446203]
2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs have extended 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multiview-consistent face animation generation.
arXiv Detail & Related papers (2022-10-12T17:59:56Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)