A high fidelity synthetic face framework for computer vision
- URL: http://arxiv.org/abs/2007.08364v1
- Date: Thu, 16 Jul 2020 14:40:28 GMT
- Title: A high fidelity synthetic face framework for computer vision
- Authors: Tadas Baltrusaitis, Erroll Wood, Virginia Estellers, Charlie Hewitt,
Sebastian Dziadzio, Marek Kowalski, Matthew Johnson, Thomas J. Cashman, and
Jamie Shotton
- Abstract summary: We propose synthesizing facial data, including ground-truth annotations, at a consistency and scale that manual annotation cannot match.
We use a parametric face model together with hand-crafted assets, which enables us to generate training data with unprecedented quality and diversity.
- Score: 10.679578971210912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analysis of faces is one of the core applications of computer vision, with
tasks including landmark alignment, head pose estimation, expression
recognition, and face recognition, among others. However, building reliable
methods requires time-consuming data collection and often even more
time-consuming manual annotation, which can be unreliable. In our work we
propose synthesizing such facial data, including ground-truth annotations that
would be almost impossible to acquire through manual annotation, at the
consistency and scale possible through the use of synthetic data. We use a
parametric face model together with hand-crafted assets, which enables us to
generate training data with unprecedented quality and diversity (varying shape,
texture, expression, pose, lighting, and hair).
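The core idea of the abstract can be illustrated with a minimal sketch: because every factor of variation (shape, expression, pose, lighting, hair) is sampled explicitly, the exact ground truth for each rendered image is known by construction. The parameter names, ranges, and dimensionalities below are illustrative assumptions, not the framework's actual parameterization.

```python
import random

# Hypothetical parameter space for a parametric face model. The real
# framework's factors (shape, texture, expression, pose, lighting, hair
# assets) are far richer; the sampling principle is the same.
PARAM_SPACE = {
    "shape": lambda rng: [rng.gauss(0.0, 1.0) for _ in range(10)],       # identity coefficients
    "expression": lambda rng: [rng.gauss(0.0, 1.0) for _ in range(5)],   # expression coefficients
    "pose_yaw_deg": lambda rng: rng.uniform(-90.0, 90.0),
    "light_azimuth_deg": lambda rng: rng.uniform(0.0, 360.0),
    "hair_asset": lambda rng: rng.choice(["short", "long", "curly", "none"]),
}

def sample_face_params(seed: int) -> dict:
    """Draw one random scene configuration. Rendering it would yield an
    image plus exact annotations (landmarks, pose, segmentation), since
    every factor of variation is known before rendering."""
    rng = random.Random(seed)  # seeded, so each sample is reproducible
    return {name: sampler(rng) for name, sampler in PARAM_SPACE.items()}

# A synthetic "dataset" is then just a list of sampled configurations,
# each of which a renderer would turn into (image, ground truth) pairs.
dataset = [sample_face_params(i) for i in range(1000)]
```

Because the sampler is seeded per example, any image in the dataset can be regenerated exactly, which is one practical advantage of synthetic data over collected data.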
Related papers
- Synthetic Counterfactual Faces [1.3062016289815055]
We build a generative AI framework to construct targeted, counterfactual, high-quality synthetic face data.
Our pipeline has many use cases, including sensitivity evaluations of face recognition systems and probes of image understanding systems.
We showcase the efficacy of our face generation pipeline on a leading commercial vision model.
arXiv Detail & Related papers (2024-07-18T22:22:49Z)
- SDFR: Synthetic Data for Face Recognition Competition [51.9134406629509]
Large-scale face recognition datasets are collected by crawling the Internet without individuals' consent, raising legal, ethical, and privacy concerns.
Recently several works proposed generating synthetic face recognition datasets to mitigate concerns in web-crawled face recognition datasets.
This paper presents the summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024).
The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones.
arXiv Detail & Related papers (2024-04-06T10:30:31Z)
- If It's Not Enough, Make It So: Reducing Authentic Data Demand in Face Recognition through Synthetic Faces [16.977459035497162]
Large face datasets are primarily sourced from web-based images, lacking explicit user consent.
In this paper, we examine whether and how synthetic face data can be used to train effective face recognition models.
arXiv Detail & Related papers (2024-04-04T15:45:25Z)
- FaceXFormer: A Unified Transformer for Facial Analysis [59.94066615853198]
FaceXformer is an end-to-end unified transformer model for a range of facial analysis tasks.
Our model effectively handles images "in-the-wild," demonstrating its robustness and generalizability across eight different tasks.
arXiv Detail & Related papers (2024-03-19T17:58:04Z)
- Face Recognition Using Synthetic Face Data [0.0]
We highlight the promising application of synthetic data, generated through rendering digital faces via our computer graphics pipeline, in achieving competitive results.
By fine-tuning the model, we obtain results that rival those achieved when training with hundreds of thousands of real images.
We also investigate the effect of adding intra-class variance factors (e.g., makeup, accessories, haircuts) on model performance.
arXiv Detail & Related papers (2023-05-17T09:26:10Z)
- Procedural Humans for Computer Vision [1.9550079119934403]
We build a parametric model of the face and body, including articulated hands, to generate realistic images of humans based on this body model.
We show that this can be extended to include the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety.
arXiv Detail & Related papers (2023-01-03T15:44:48Z)
- Can Shadows Reveal Biometric Information? [48.3561395627331]
We show that the biometric information leakage from shadows can be sufficient for reliable identity inference under representative scenarios.
We then develop a learning-based method that demonstrates this phenomenon in real settings.
arXiv Detail & Related papers (2022-09-21T02:36:32Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Fake It Till You Make It: Face analysis in the wild using synthetic data alone [9.081019005437309]
We show that it is possible to perform face-related computer vision in the wild using synthetic data alone.
We describe how to combine a procedurally-generated 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism.
arXiv Detail & Related papers (2021-09-30T13:07:04Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematically empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
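Identity mixup, as described here, interpolates between two identity representations so that intermediate blend weights yield novel synthetic identities. A minimal sketch of that interpolation, assuming identities are represented as latent vectors (the vector form and the generator that would decode them are assumptions, not details from the paper):

```python
import random

def identity_mixup(z_a, z_b, lam=None, rng=random):
    """Linearly blend two identity latent codes.

    lam near 0 or 1 stays close to one parent identity; intermediate
    values produce in-between (novel) identities. A face generator
    would then decode the blended code into a synthetic face image.
    """
    if lam is None:
        lam = rng.uniform(0.0, 1.0)  # sample a blend weight if none given
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(z_a, z_b)]
    return mixed, lam
```

For example, `identity_mixup([1.0, 0.0], [0.0, 1.0], lam=0.5)` returns the midpoint code `[0.5, 0.5]`, halfway between the two parent identities.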
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Monocular Real-time Full Body Capture with Inter-part Correlations [66.22835689189237]
We present the first method for real-time full body capture that estimates shape and motion of body and hands together with a dynamic 3D face model from a single color image.
Our approach uses a new neural network architecture that exploits correlations between body and hands at high computational efficiency.
arXiv Detail & Related papers (2020-12-11T02:37:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.