Mesh-Tension Driven Expression-Based Wrinkles for Synthetic Faces
- URL: http://arxiv.org/abs/2210.03529v1
- Date: Wed, 5 Oct 2022 18:00:13 GMT
- Title: Mesh-Tension Driven Expression-Based Wrinkles for Synthetic Faces
- Authors: Chirag Raman, Charlie Hewitt, Erroll Wood, Tadas Baltrusaitis
- Abstract summary: We boost the realism of our synthetic faces by introducing dynamic skin wrinkles in response to facial expressions.
Our key contribution is an approach that produces realistic wrinkles across a large and diverse population of digital humans.
We also introduce the 300W-winks evaluation subset and the Pexels dataset of closed eyes and winks.
- Score: 6.098254376499899
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in synthesizing realistic faces have shown that synthetic
training data can replace real data for various face-related computer vision
tasks. A question arises: how important is realism? Is the pursuit of
photorealism excessive? In this work, we show otherwise. We boost the realism
of our synthetic faces by introducing dynamic skin wrinkles in response to
facial expressions and observe significant performance improvements in
downstream computer vision tasks. Previous approaches for producing such
wrinkles either required prohibitive artist effort to scale across identities
and expressions or were not capable of reconstructing high-frequency skin
details with sufficient fidelity. Our key contribution is an approach that
produces realistic wrinkles across a large and diverse population of digital
humans. Concretely, we formalize the concept of mesh-tension and use it to
aggregate possible wrinkles from high-quality expression scans into albedo and
displacement texture maps. At synthesis, we use these maps to produce wrinkles
even for expressions not represented in the source scans. Additionally, to
provide a more nuanced indicator of model performance under deformations
resulting from compressed expressions, we introduce the 300W-winks evaluation
subset and the Pexels dataset of closed eyes and winks.
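As a rough illustration of the mesh-tension idea described above: per-vertex tension can be estimated from the relative change in the lengths of incident mesh edges between a neutral mesh and an expression mesh. The sketch below is an assumption (function and variable names are illustrative, and the paper's exact definition and normalization may differ), not the authors' implementation.

```python
import numpy as np

def vertex_tension(verts_neutral, verts_expr, faces):
    """Estimate per-vertex tension as the mean relative change in the lengths
    of incident edges between a neutral and an expression mesh.
    Negative values indicate compression (where wrinkles tend to form),
    positive values indicate stretching.
    verts_*: (V, 3) float arrays; faces: (F, 3) int array of triangle indices."""
    # Gather the three (undirected) edges of every triangle.
    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])

    def edge_len(verts):
        return np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)

    rel_change = edge_len(verts_expr) / (edge_len(verts_neutral) + 1e-9) - 1.0

    # Scatter-average the edge changes onto their endpoint vertices.
    tension = np.zeros(len(verts_neutral))
    counts = np.zeros(len(verts_neutral))
    for k in (0, 1):
        np.add.at(tension, edges[:, k], rel_change)
        np.add.at(counts, edges[:, k], 1.0)
    return tension / np.maximum(counts, 1.0)
```

In a pipeline like the one described, such a per-vertex signal could be baked into UV space and used as a blend weight for the aggregated wrinkle albedo and displacement maps; the paper's actual weighting scheme may differ.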
Related papers
- Digi2Real: Bridging the Realism Gap in Synthetic Data Face Recognition via Foundation Models [4.910937238451485]
We introduce a novel framework for realism transfer aimed at enhancing the realism of synthetically generated face images.
By integrating the controllable aspects of the graphics pipeline with our realism enhancement technique, we generate a large number of realistic variations.
arXiv Detail & Related papers (2024-11-04T15:42:22Z)
- Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures [33.463245327698]
We present a novel volumetric prior on human faces that allows for high-fidelity expressive face modeling.
We leverage a 3D Morphable Face Model to synthesize a large training set, rendering each identity with different expressions.
We then train a conditional Neural Radiance Field prior on this synthetic dataset and, at inference time, fine-tune the model on a very sparse set of real images of a single subject.
arXiv Detail & Related papers (2024-10-01T12:24:50Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We introduce geometry-aware contrastive learning to learn multi-view-consistent features under geometric constraints; a generic sketch of such a contrastive objective follows this entry.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
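The paper's geometry-aware loss is not reproduced here; the snippet below is only a generic InfoNCE-style contrastive objective over features of the same 3D point observed in two views, the basic mechanism such multi-view-consistency objectives build on. All names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(feats_view_a, feats_view_b, temperature=0.1):
    """Generic InfoNCE loss: row i of each tensor is assumed to describe the
    same 3D point seen from two views (positive pair); all other rows in the
    batch serve as negatives. feats_view_*: (N, D) tensors."""
    z_a = F.normalize(feats_view_a, dim=1)
    z_b = F.normalize(feats_view_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```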
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can be accurately fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face under controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap; a minimal sketch of the mixup operation follows this entry.
We also perform a systematic empirical analysis of synthetic face images to provide insights into how to effectively use synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
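The exact identity-mixup and domain-mixup formulations are not reproduced here; the sketch below shows only plain mixup of two batches and their soft labels, the basic operation those variants build on, with illustrative names.

```python
import torch

def mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    """Plain mixup: convex combination of two batches and their soft labels.
    Identity mixup would interpolate between different identities, and domain
    mixup between synthetic and real images; both specialize this operation."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix
```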
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details from expression-dependent wrinkles; a conceptual sketch follows this entry.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
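The detail-consistency idea above can be illustrated conceptually: for two images of the same person, re-rendering one image's geometry with the other's detail code should still reproduce it, since person-specific detail is shared while expression-dependent wrinkles come from the expression parameters. The sketch below is a hedged paraphrase; `render_fn` and all argument names are hypothetical, and the paper's actual loss may differ.

```python
import torch

def detail_consistency_loss(render_fn, image_i, coarse_i, expr_i, detail_j):
    """Conceptual detail-consistency term: re-render image i using the detail
    code of another image j of the same subject and penalize the photometric
    difference. `render_fn(coarse, expression, detail)` is a hypothetical
    differentiable renderer returning an image tensor shaped like `image_i`."""
    rendered_swapped = render_fn(coarse_i, expr_i, detail_j)
    return torch.mean(torch.abs(rendered_swapped - image_i))
```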
- Neural Face Models for Example-Based Visual Speech Synthesis [2.2817442144155207]
We present a marker-less approach for facial motion capture based on multi-view video.
We learn a neural representation of facial expressions, which is used to seamlessly concatenate facial performances during the animation procedure.
arXiv Detail & Related papers (2020-09-22T07:35:33Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study of the correlations between different semantics and disentangle them more effectively via subspace projection (a sketch follows this entry).
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
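As a rough illustration of the latent editing and subspace projection described in the InterFaceGAN entry above (function names are illustrative; the paper works with boundaries learned by linear classifiers in the GAN latent space):

```python
import numpy as np

def edit_latent(z, normal, alpha):
    """Move a latent code along the unit normal of a semantic boundary."""
    return z + alpha * normal

def conditioned_direction(n_primal, n_condition):
    """Project the primal semantic direction onto the subspace orthogonal to
    another semantic's normal, so that editing the first attribute perturbs
    the second as little as possible."""
    n_condition = n_condition / np.linalg.norm(n_condition)
    projected = n_primal - np.dot(n_primal, n_condition) * n_condition
    return projected / np.linalg.norm(projected)
```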