Dynamic Facial Asset and Rig Generation from a Single Scan
- URL: http://arxiv.org/abs/2010.00560v2
- Date: Mon, 5 Oct 2020 20:12:53 GMT
- Title: Dynamic Facial Asset and Rig Generation from a Single Scan
- Authors: Jiaman Li, Zhengfei Kuang, Yajie Zhao, Mingming He, Karl Bladin and Hao Li
- Abstract summary: We propose a framework for the automatic generation of high-quality dynamic facial assets.
Our framework takes a single scan as input to generate a set of personalized blendshapes, dynamic and physically-based textures, as well as secondary facial components.
- Score: 17.202189917030033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of high-fidelity computer-generated (CG) characters used in film
and gaming requires intensive manual labor and a comprehensive set of facial
assets to be captured with complex hardware, resulting in high cost and long
production cycles. In order to simplify and accelerate this digitization
process, we propose a framework for the automatic generation of high-quality
dynamic facial assets, including rigs which can be readily deployed for artists
to polish. Our framework takes a single scan as input to generate a set of
personalized blendshapes, dynamic and physically-based textures, as well as
secondary facial components (e.g., teeth and eyeballs). Built upon a facial
database capturing pore-level details, with over 4,000 scans of varying
expressions and identities, we adopt a self-supervised neural network to learn
personalized blendshapes from a set of template expressions. We also model the
joint distribution between identities and expressions, enabling the inference
of the full set of personalized blendshapes with dynamic appearances from a
single neutral input scan. Our generated personalized face rig assets are
seamlessly compatible with cutting-edge industry pipelines for facial animation
and rendering. We demonstrate that our framework is robust and effective by
running inference on a wide range of novel subjects, and show compelling
rendering results while animating faces with generated customized
physically-based dynamic textures.
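To make the generated rig concrete: a blendshape rig of the kind the framework outputs is conventionally a neutral mesh plus per-expression vertex offsets, combined linearly by animation weights. Below is a minimal sketch of that standard linear model in NumPy; the `BlendshapeRig` class, array shapes, and toy data are illustrative assumptions, not the paper's actual asset format.

```python
import numpy as np

class BlendshapeRig:
    """Minimal linear blendshape rig: neutral mesh + per-expression deltas.

    Illustrative sketch only; names and shapes are assumptions, not the
    paper's asset format.
    """

    def __init__(self, neutral, blendshapes):
        # neutral: (V, 3) vertex positions of the neutral-expression mesh
        # blendshapes: (K, V, 3) full meshes for K template expressions
        self.neutral = np.asarray(neutral, dtype=np.float64)
        # Store offsets from neutral so a zero weight vector reproduces the neutral face
        self.deltas = np.asarray(blendshapes, dtype=np.float64) - self.neutral

    def evaluate(self, weights):
        # weights: (K,) per-expression activations, typically in [0, 1]
        w = np.asarray(weights, dtype=np.float64)
        # Linear blendshape model: v(w) = v_neutral + sum_k w_k * delta_k
        return self.neutral + np.tensordot(w, self.deltas, axes=1)


# Usage: a toy rig with 4 vertices and 2 template expressions
neutral = np.zeros((4, 3))
expressions = np.random.default_rng(0).normal(size=(2, 4, 3))
rig = BlendshapeRig(neutral, expressions)
frame = rig.evaluate([0.7, 0.2])  # one animated frame, shape (4, 3)
print(frame.shape)
```

Storing offsets rather than full meshes means a zero weight vector reproduces the neutral scan exactly, which is why personalized blendshapes are typically delivered as deltas relative to the input neutral.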
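The paper's central inference step, going from a single neutral scan to a full personalized blendshape set, can be pictured as an identity-conditioned decoder. The sketch below is a generic stand-in under loud assumptions: a random linear map from a hypothetical identity code to the K expression deltas; the paper's actual self-supervised network and its joint identity-expression model are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 4, 2, 8  # vertices, template expressions, identity-code size (toy sizes)

# Stand-in for a trained decoder: one weight matrix per template expression,
# mapping a D-dim identity code to that expression's vertex offsets from neutral.
# In the paper this role is played by a learned network; these are placeholders.
W = rng.normal(scale=0.01, size=(K, V * 3, D))

def infer_blendshape_deltas(identity_code):
    """Map an identity code to K personalized expression deltas, shape (K, V, 3)."""
    z = np.asarray(identity_code, dtype=np.float64)
    return (W @ z).reshape(K, V, 3)

# Usage: pretend z was regressed from a single neutral scan, then animate.
neutral = np.zeros((V, 3))                       # toy neutral mesh
deltas = infer_blendshape_deltas(rng.normal(size=D))
weights = np.array([0.7, 0.2])                   # per-expression activations
frame = neutral + np.tensordot(weights, deltas, axes=1)  # animated frame, (V, 3)
print(frame.shape)
```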
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation [79.99551055245071]
We propose VividPose, an end-to-end pipeline that ensures superior temporal stability.
An identity-aware appearance controller integrates additional facial information without compromising other appearance details.
A geometry-aware pose controller utilizes both dense rendering maps from SMPL-X and sparse skeleton maps.
VividPose exhibits superior generalization capabilities on our proposed in-the-wild dataset.
arXiv Detail & Related papers (2024-05-28T13:18:32Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Our results achieve an unprecedented level of identity-consistent, high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces [21.946327323788275]
3D rendering of dynamic faces is a challenging problem.
We present a novel representation that enables high-quality rendering of an actor's dynamic facial performances.
arXiv Detail & Related papers (2024-04-22T00:44:13Z)
- FitMe: Deep Photorealistic 3D Morphable Model Avatars [119.03325450951074]
We introduce FitMe, a facial reflectance model and a differentiable rendering pipeline.
FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images.
In contrast with recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh and texture-based avatars.
arXiv Detail & Related papers (2023-05-16T17:42:45Z)
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
We bridge traditional mesh workflows with a new class of neural rendering techniques.
In this paper, we present a novel approach for rendering human performances from video.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Video-driven Neural Physically-based Facial Asset for Production [33.24654834163312]
We present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.
Our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods.
arXiv Detail & Related papers (2022-02-11T13:22:48Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- CONFIG: Controllable Neural Face Image Generation [10.443563719622645]
ConfigNet is a neural face model that allows for controlling individual aspects of output images in meaningful ways.
Our novel method uses synthetic data to factorize the latent space into elements that correspond to the inputs of a traditional rendering pipeline.
arXiv Detail & Related papers (2020-05-06T09:19:46Z)
- Learning Formation of Physically-Based Face Attributes [16.55993873730069]
Based on a combined dataset of 4,000 high-resolution facial scans, we introduce a non-linear morphable face model.
Our deep-learning-based generative model learns to correlate albedo and geometry, which ensures the anatomical correctness of the generated assets.
arXiv Detail & Related papers (2020-04-02T07:01:30Z)