4D Agnostic Real-Time Facial Animation Pipeline for Desktop Scenarios
- URL: http://arxiv.org/abs/2304.02814v1
- Date: Thu, 6 Apr 2023 01:32:58 GMT
- Title: 4D Agnostic Real-Time Facial Animation Pipeline for Desktop Scenarios
- Authors: Wei Chen and HongWei Xu and Jelo Wang
- Abstract summary: We present a high-precision real-time facial animation pipeline suitable for animators to use on their desktops.
The system enables animators to create high-quality facial animations with ease and speed.
Our approach has the potential to revolutionize the way facial animation is done in the entertainment industry.
- Score: 8.274472944075713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a high-precision real-time facial animation pipeline suitable for
animators to use on their desktops. This pipeline is about to be launched in
FACEGOOD's Avatary (https://www.avatary.com/) software, which will
accelerate animators' productivity. The pipeline differs from professional
head-mounted facial capture solutions in that it only requires the use of a
consumer-grade 3D camera on the desk to achieve high-precision real-time facial
capture. The system enables animators to create high-quality facial animations
with ease and speed, while reducing the cost and complexity of traditional
facial capture solutions. Our approach has the potential to revolutionize the
way facial animation is done in the entertainment industry.
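The abstract publishes no code or API, but the pipeline it describes implies a simple per-frame loop: a consumer-grade 3D camera on the desk yields depth frames, and a trained model regresses rig controls (e.g. blendshape weights) from each frame in real time. A minimal sketch of that loop, where the camera read, the regressor, and the 52-weight ARKit-style control set are all stand-in assumptions, not FACEGOOD's actual implementation:

```python
# Hypothetical capture loop: depth frame in, blendshape weights out.
# Both read_depth_frame and regress_blendshape_weights are stubs; a
# real pipeline would call a camera SDK and a trained network here.
import random

NUM_BLENDSHAPES = 52  # assumed ARKit-style blendshape count


def read_depth_frame(size=64):
    """Stub for a consumer 3D-camera SDK call: a fake depth map in mm."""
    return [[random.uniform(300.0, 800.0) for _ in range(size)]
            for _ in range(size)]


def regress_blendshape_weights(depth_frame):
    """Stub for the trained regressor: depth frame -> weights in [0, 1]."""
    mean_depth = sum(map(sum, depth_frame)) / (len(depth_frame) ** 2)
    # Toy deterministic mapping from mean depth to each weight.
    return [((mean_depth * (i + 1)) % 100.0) / 100.0
            for i in range(NUM_BLENDSHAPES)]


def run_capture(n_frames=3):
    """One regressed weight vector per camera frame."""
    return [regress_blendshape_weights(read_depth_frame())
            for _ in range(n_frames)]
```

In a real system the loop would run at camera frame rate and stream each weight vector to the animation rig; the sketch only shows the data flow.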
Related papers
- Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters [24.615066741391125]
We propose a holistic solution to automatically animate virtual human faces.
A deep learning model was first trained to retarget the facial expression from input face images to virtual human faces.
A practical toolkit was developed using Unity 3D, making it compatible with the most popular VR applications.
arXiv Detail & Related papers (2024-02-21T11:35:20Z)
- Attention-Based VR Facial Animation with Visual Mouth Camera Guidance for Immersive Telepresence Avatars [19.70403947793871]
We present a hybrid method that uses both keypoints and direct visual guidance from a mouth camera.
Our method generalizes to unseen operators and requires only a quick enrolment step with capture of two short videos.
We highlight how the facial animation contributed to our victory at the ANA Avatar XPRIZE Finals.
arXiv Detail & Related papers (2023-12-15T12:45:11Z)
- Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD Space [38.940128217895115]
We propose Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs.
A key component of our method is an RGBD animation module that learns facial motion from raw RGBD videos via hierarchical motion dictionaries and animates RGBD images rendered from a 3D facial mesh coarse-to-fine, enabling facial animation on arbitrary 3D characters.
Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results.
arXiv Detail & Related papers (2023-08-11T11:29:01Z)
- Audio-Driven Talking Face Generation with Diverse yet Realistic Facial Animations [61.65012981435094]
DIRFA is a novel method that can generate talking faces with diverse yet realistic facial animations from the same driving audio.
To accommodate fair variation of plausible facial animations for the same audio, we design a transformer-based probabilistic mapping network.
We show that DIRFA can generate talking faces with realistic facial animations effectively.
arXiv Detail & Related papers (2023-04-18T12:36:15Z)
- FNeVR: Neural Volume Rendering for Face Animation [53.92664037596834]
We propose a Face Neural Volume Rendering (FNeVR) network to explore the potential of 2D motion warping and 3D volume rendering.
In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details for image rendering.
We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple yet effective way.
arXiv Detail & Related papers (2022-09-21T13:18:59Z)
- High-Quality Real Time Facial Capture Based on Single Camera [0.0]
We train a convolutional neural network to produce high-quality continuous blendshape weight output from video training.
We demonstrate compelling animation inference in challenging areas such as eyes and lips.
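The "continuous blendshape weight output" such single-camera systems regress is typically consumed by a standard linear blendshape model: each deformed vertex is the neutral vertex plus a weighted sum of per-blendshape deltas. A toy illustration with made-up mesh data (not from the paper):

```python
# Linear blendshape deformation: vertex = neutral + sum_i(w_i * delta_i).
def apply_blendshapes(neutral, deltas, weights):
    """neutral: list of (x, y, z) vertices; deltas: one delta mesh per
    blendshape; weights: one float per blendshape."""
    out = []
    for v_idx, (x, y, z) in enumerate(neutral):
        dx = sum(w * deltas[b][v_idx][0] for b, w in enumerate(weights))
        dy = sum(w * deltas[b][v_idx][1] for b, w in enumerate(weights))
        dz = sum(w * deltas[b][v_idx][2] for b, w in enumerate(weights))
        out.append((x + dx, y + dy, z + dz))
    return out


# Two vertices, two blendshapes (toy data).
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = [[(0.0, 1.0, 0.0), (0.0, 0.0, 0.0)],   # blendshape 0 lifts vertex 0
          [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0)]]   # blendshape 1 lifts vertex 1

print(apply_blendshapes(neutral, deltas, [1.0, 0.5]))
# → [(0.0, 1.0, 0.0), (1.0, 0.25, 0.0)]
```

Because the weights are continuous, a regressor emitting them per frame yields smooth animation without any per-frame mesh optimization.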
arXiv Detail & Related papers (2021-11-15T06:42:27Z)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion, while also producing plausible animation of the parts of the face that are uncorrelated with the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z)
- Audio- and Gaze-driven Facial Animation of Codec Avatars [149.0094713268313]
We describe the first approach to animate Codec Avatars in real-time using audio and/or eye tracking.
Our goal is to display expressive conversations between individuals that exhibit important social signals.
arXiv Detail & Related papers (2020-08-11T22:28:48Z)
- A Robust Interactive Facial Animation Editing System [0.0]
We propose a new learning-based approach to easily edit a facial animation from a set of intuitive control parameters.
We use a resolution-preserving fully convolutional neural network that maps control parameters to sequences of blendshape coefficients.
The proposed system is robust and can handle coarse, exaggerated edits from non-specialist users.
arXiv Detail & Related papers (2020-07-18T08:31:02Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly-accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.