Neuromuscular Control of the Face-Head-Neck Biomechanical Complex With
Learning-Based Expression Transfer From Images and Videos
- URL: http://arxiv.org/abs/2111.06517v1
- Date: Fri, 12 Nov 2021 01:13:07 GMT
- Title: Neuromuscular Control of the Face-Head-Neck Biomechanical Complex With
Learning-Based Expression Transfer From Images and Videos
- Authors: Xiao S. Zeng, Surya Dwarakanath, Wuyue Lu, Masaki Nakada, Demetri
Terzopoulos
- Abstract summary: The transfer of facial expressions from people to 3D face models is a classic computer graphics problem.
We present a novel, learning-based approach to transferring facial expressions to a biomechanical model.
- Score: 13.408753449508326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The transfer of facial expressions from people to 3D face models is a classic
computer graphics problem. In this paper, we present a novel, learning-based
approach to transferring facial expressions and head movements from images and
videos to a biomechanical model of the face-head-neck complex. Leveraging the
Facial Action Coding System (FACS) as an intermediate representation of the
expression space, we train a deep neural network to take in FACS Action Units
(AUs) and output suitable facial muscle and jaw activation signals for the
musculoskeletal model. Through biomechanical simulation, the activations deform
the facial soft tissues, thereby transferring the expression to the model. Our
approach has advantages over previous approaches. First, the facial expressions
are anatomically consistent as our biomechanical model emulates the relevant
anatomy of the face, head, and neck. Second, by training the neural network
using data generated from the biomechanical model itself, we eliminate the
manual effort of data collection for expression transfer. The success of our
approach is demonstrated through experiments involving the transfer onto our
face-head-neck model of facial expressions and head poses from a range of
facial images and videos.
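The core mapping the abstract describes — FACS Action Unit intensities in, facial muscle and jaw activation signals out — can be sketched as a small feedforward network. The layer sizes, AU/muscle counts, and function names below are illustrative assumptions, not the authors' actual architecture; in the paper's pipeline the weights would be trained on pairs generated by running the biomechanical simulator itself.

```python
# Hypothetical sketch of the AU-to-activation mapping described in the
# abstract. Dimensions and initialization are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_AUS = 46        # FACS Action Unit intensities (network input)
N_MUSCLES = 52    # facial muscle + jaw activation signals (network output)
HIDDEN = 128

# Randomly initialized two-layer MLP. In the described approach, training
# data (activation, AU) pairs come from the biomechanical model itself,
# eliminating manual data collection.
W1 = rng.normal(0, 0.1, (N_AUS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_MUSCLES))
b2 = np.zeros(N_MUSCLES)

def au_to_activations(aus: np.ndarray) -> np.ndarray:
    """Map AU intensities in [0, 1] to muscle activations in [0, 1]."""
    h = np.tanh(aus @ W1 + b1)                    # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps outputs valid

aus = rng.uniform(0.0, 1.0, N_AUS)   # e.g. AU intensities estimated from an image
acts = au_to_activations(aus)
print(acts.shape)  # (52,)
```

The sigmoid output layer reflects that muscle activations are bounded signals; the resulting vector would drive the musculoskeletal simulation that deforms the facial soft tissues.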
Related papers
- Learning a Generalized Physical Face Model From Data [20.432913500642417]
We propose a generalized physical face model that we learn from a large 3D face dataset in a simulation-free manner.
Our model can be quickly fit to any unseen identity and produce a ready-to-animate physical face model automatically.
All the while, the resulting animations allow for physical effects like collision avoidance, gravity, paralysis, bone reshaping and more.
arXiv Detail & Related papers (2024-02-29T18:59:31Z)
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expression.
We achieve higher-quality and more accurate facial expression transfer results compared to state-of-the-art methods, and demonstrate applicability of various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- Pose Guided Human Image Synthesis with Partially Decoupled GAN [25.800174118151638]
Pose Guided Human Image Synthesis (PGHIS) is a challenging task of transforming a human image from the reference pose to a target pose.
We propose a method by decoupling the human body into several parts to guide the synthesis of a realistic image of the person.
In addition, we design a multi-head attention-based module for PGHIS.
arXiv Detail & Related papers (2022-10-07T15:31:37Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, that have successfully been used in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state-of-the-arts on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short term-memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Head2Head++: Deep Facial Attributes Re-Targeting [6.230979482947681]
We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment.
We manage to capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos.
Our system performs end-to-end reenactment at nearly real-time speed (18 fps).
arXiv Detail & Related papers (2020-06-17T23:38:37Z)
- Head2Head: Video-based Neural Head Synthesis [50.32988828989691]
We propose a novel machine learning architecture for facial reenactment.
We show that the proposed method can transfer facial expressions, pose and gaze of a source actor to a target video in a photo-realistic fashion more accurately than state-of-the-art methods.
arXiv Detail & Related papers (2020-05-22T00:44:43Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.