Learning a Generalized Physical Face Model From Data
- URL: http://arxiv.org/abs/2402.19477v1
- Date: Thu, 29 Feb 2024 18:59:31 GMT
- Title: Learning a Generalized Physical Face Model From Data
- Authors: Lingchen Yang, Gaspard Zoss, Prashanth Chandran, Markus Gross, Barbara
Solenthaler, Eftychios Sifakis, Derek Bradley
- Abstract summary: We propose a generalized physical face model that we learn from a large 3D face dataset in a simulation-free manner.
Our model can be quickly fit to any unseen identity and produce a ready-to-animate physical face model automatically.
All the while, the resulting animations allow for physical effects like collision avoidance, gravity, paralysis, bone reshaping and more.
- Score: 20.432913500642417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physically-based simulation is a powerful approach for 3D facial animation as
the resulting deformations are governed by physical constraints, making it easy
to resolve self-collisions, respond to external forces, and perform realistic
anatomy edits. Today's methods are data-driven, where the actuations
for finite elements are inferred from captured skin geometry. Unfortunately,
these approaches have not been widely adopted due to the complexity of
initializing the material space and learning the deformation model for each
character separately, which often requires a skilled artist followed by lengthy
network training. In this work, we aim to make physics-based facial animation
more accessible by proposing a generalized physical face model that we learn
from a large 3D face dataset in a simulation-free manner. Once trained, our
model can be quickly fit to any unseen identity and produce a ready-to-animate
physical face model automatically. Fitting is as easy as providing a single 3D
face scan, or even a single face image. After fitting, we offer intuitive
animation controls, as well as the ability to retarget animations across
characters. All the while, the resulting animations allow for physical effects
like collision avoidance, gravity, paralysis, bone reshaping and more.
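As a rough, hypothetical illustration of the workflow described in the abstract (none of the class or function names below come from the paper), the following sketch fits an identity code of a pretrained generalized face model to a target 3D scan by gradient descent and then drives the fitted face with an actuation code. A small MLP stands in for the learned model, and the scan is a random placeholder tensor.
```python
# Minimal sketch, not the authors' implementation: all names and dimensions
# below are assumptions made for illustration only.
import torch
import torch.nn as nn

N_VERTS = 5023            # assumed vertex count of the face template
ID_DIM, ACT_DIM = 64, 32  # assumed identity / actuation latent sizes


class GeneralizedFaceModel(nn.Module):
    """Stand-in for the learned model: maps (identity, actuation) to vertex positions."""

    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(ID_DIM + ACT_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_VERTS * 3),
        )

    def forward(self, identity, actuation):
        x = torch.cat([identity, actuation], dim=-1)
        return self.decoder(x).view(-1, N_VERTS, 3)


def fit_identity(model, target_scan, steps=200, lr=1e-2):
    """Optimize an identity code so the neutral (zero-actuation) face matches the scan."""
    identity = torch.zeros(1, ID_DIM, requires_grad=True)
    neutral_actuation = torch.zeros(1, ACT_DIM)
    optimizer = torch.optim.Adam([identity], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        predicted = model(identity, neutral_actuation)
        loss = ((predicted - target_scan) ** 2).mean()  # simple vertex-to-vertex fitting loss
        loss.backward()
        optimizer.step()
    return identity.detach()


model = GeneralizedFaceModel()                 # in practice, the pretrained generalized model
scan = torch.randn(1, N_VERTS, 3)              # placeholder for a registered 3D face scan
identity = fit_identity(model, scan)
animated = model(identity, torch.randn(1, ACT_DIM))  # animate with an arbitrary actuation code
print(animated.shape)                          # torch.Size([1, 5023, 3])
```
The physical effects highlighted in the abstract (collision avoidance, gravity, paralysis, bone reshaping) and the simulation-free training procedure are specific to the paper's actual model and are not represented in this sketch.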
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- AnimeCeleb: Large-Scale Animation CelebFaces Dataset via Controllable 3D Synthetic Models [19.6347170450874]
We present a large-scale animation CelebFaces dataset (AnimeCeleb) via controllable synthetic animation models.
To facilitate the data generation process, we build a semi-automatic pipeline based on an open 3D software.
This leads to constructing a large-scale animation face dataset that includes multi-pose and multi-style animation faces with rich annotations.
arXiv Detail & Related papers (2021-11-15T10:00:06Z)
- SAFA: Structure Aware Face Animation [9.58882272014749]
We propose a structure aware face animation (SAFA) method which constructs specific geometric structures to model different components of a face image.
We use a 3D morphable model (3DMM) to model the face, multiple affine transforms to model the other foreground components like hair and beard, and an identity transform to model the background.
The 3DMM geometric embedding not only helps generate realistic structure for the driving scene, but also contributes to better perception of occluded area in the generated image.
arXiv Detail & Related papers (2021-11-09T03:22:38Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep video-realistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Unsupervised Shape and Pose Disentanglement for 3D Meshes [49.431680543840706]
We present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting.
We use a combination of self-consistency and cross-consistency constraints to learn pose and shape spaces from registered meshes (see the sketch after this entry).
We demonstrate the usefulness of learned representations through a number of tasks including pose transfer and shape retrieval.
arXiv Detail & Related papers (2020-07-22T11:00:27Z)
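To make the self-/cross-consistency idea in the last entry concrete, here is a minimal, hypothetical sketch (placeholder linear encoders and decoder, not the paper's architecture): for two registered meshes of the same subject, each mesh should be reconstructed from its own codes, and a mesh should also be reconstructed when its pose code is combined with the other mesh's shape code.
```python
# Minimal sketch of self-/cross-consistency losses for shape-pose disentanglement.
# The linear encoders/decoder are placeholders, not the modules from the paper.
import torch
import torch.nn as nn

N_VERTS, SHAPE_DIM, POSE_DIM = 1000, 16, 16

shape_encoder = nn.Linear(N_VERTS * 3, SHAPE_DIM)
pose_encoder = nn.Linear(N_VERTS * 3, POSE_DIM)
decoder = nn.Linear(SHAPE_DIM + POSE_DIM, N_VERTS * 3)


def reconstruct(shape_code, pose_code):
    return decoder(torch.cat([shape_code, pose_code], dim=-1))


# two registered meshes of the same subject in different poses (placeholders)
mesh_a = torch.randn(1, N_VERTS * 3)
mesh_b = torch.randn(1, N_VERTS * 3)

# self-consistency: each mesh is reconstructed from its own shape and pose codes
self_loss = ((reconstruct(shape_encoder(mesh_a), pose_encoder(mesh_a)) - mesh_a) ** 2).mean()

# cross-consistency: mesh_b's shape code with mesh_a's pose code should still
# reconstruct mesh_a, since both meshes share the same underlying shape
cross_loss = ((reconstruct(shape_encoder(mesh_b), pose_encoder(mesh_a)) - mesh_a) ** 2).mean()

(self_loss + cross_loss).backward()
```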