Fake It Till You Make It: Face analysis in the wild using synthetic data
alone
- URL: http://arxiv.org/abs/2109.15102v1
- Date: Thu, 30 Sep 2021 13:07:04 GMT
- Title: Fake It Till You Make It: Face analysis in the wild using synthetic data
alone
- Authors: Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt, Sebastian
Dziadzio, Matthew Johnson, Virginia Estellers, Thomas J. Cashman, Jamie
Shotton
- Abstract summary: We show that it is possible to perform face-related computer vision in the wild using synthetic data alone.
We describe how to combine a procedurally-generated 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism.
- Score: 9.081019005437309
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We demonstrate that it is possible to perform face-related computer vision in
the wild using synthetic data alone. The community has long enjoyed the
benefits of synthesizing training data with graphics, but the domain gap
between real and synthetic data has remained a problem, especially for human
faces. Researchers have tried to bridge this gap with data mixing, domain
adaptation, and domain-adversarial training, but we show that it is possible to
synthesize data with minimal domain gap, so that models trained on synthetic
data generalize to real in-the-wild datasets. We describe how to combine a
procedurally-generated parametric 3D face model with a comprehensive library of
hand-crafted assets to render training images with unprecedented realism and
diversity. We train machine learning systems for face-related tasks such as
landmark localization and face parsing, showing that synthetic data can both
match real data in accuracy as well as open up new approaches where manual
labelling would be impossible.
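The pipeline the abstract describes, sampling a procedurally-generated parametric face plus hand-crafted assets so that every rendered image comes with perfect labels "for free", can be sketched as follows. This is a toy illustration, not the authors' implementation: the linear landmark model, the coefficient counts, and all asset names below are assumptions chosen only to show the structure of such a generator.

```python
import numpy as np

# Toy stand-in for a procedural synthetic-face pipeline: sample parameters for
# a parametric face model plus hand-crafted assets, then emit a training sample
# whose dense labels (here, 2D landmarks) are known exactly from the generator.
rng = np.random.default_rng(0)
N_LANDMARKS = 68          # classic 68-point landmark convention
N_ID, N_EXPR = 10, 5      # identity / expression coefficient counts (assumed)

# Fixed random linear bases play the role of a 3DMM-style face model.
MEAN = rng.normal(size=(N_LANDMARKS, 2))
ID_BASIS = rng.normal(size=(N_ID, N_LANDMARKS, 2)) * 0.1
EXPR_BASIS = rng.normal(size=(N_EXPR, N_LANDMARKS, 2)) * 0.05

HAIR = ["bald", "short", "long", "curly"]   # illustrative asset library
ENVS = ["office", "outdoor", "studio"]

def sample_training_example(seed):
    """Sample one synthetic face: random parameters in, perfect labels out."""
    r = np.random.default_rng(seed)
    id_coeffs = r.normal(size=N_ID)
    expr_coeffs = r.normal(size=N_EXPR)
    landmarks = (MEAN
                 + np.tensordot(id_coeffs, ID_BASIS, axes=1)
                 + np.tensordot(expr_coeffs, EXPR_BASIS, axes=1))
    assets = {"hair": HAIR[int(r.integers(len(HAIR)))],
              "environment": ENVS[int(r.integers(len(ENVS)))]}
    # A real pipeline would rasterize a textured 3D mesh here; we return the
    # parameters in place of pixels.
    return {"params": np.concatenate([id_coeffs, expr_coeffs]),
            "landmarks": landmarks, "assets": assets}

dataset = [sample_training_example(i) for i in range(100)]
print(len(dataset), dataset[0]["landmarks"].shape)  # → 100 (68, 2)
```

The key property the abstract relies on is visible even in this sketch: because labels are computed from the same parameters that drive rendering, annotations are exact and arbitrarily dense, which is what makes tasks like face parsing feasible where manual labelling would be impossible.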
Related papers
- Unveiling Synthetic Faces: How Synthetic Datasets Can Expose Real Identities [22.8742248559748]
We show that in 6 state-of-the-art synthetic face recognition datasets, several samples from the original real dataset are leaked.
This paper is the first work which shows the leakage from training data of generator models into the generated synthetic face recognition datasets.
arXiv Detail & Related papers (2024-10-31T15:17:14Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Massively Annotated Datasets for Assessment of Synthetic and Real Data in Face Recognition [0.2775636978045794]
We study the drift between the performance of models trained on real and synthetic datasets.
We conduct studies on the differences between real and synthetic datasets on the attribute set.
Interestingly, we verify that while real samples suffice to explain the synthetic distribution, the reverse does not hold.
arXiv Detail & Related papers (2024-04-23T17:10:49Z)
- SDFR: Synthetic Data for Face Recognition Competition [51.9134406629509]
Large-scale face recognition datasets are collected by crawling the Internet and without individuals' consent, raising legal, ethical, and privacy concerns.
Recently, several works have proposed generating synthetic face recognition datasets to mitigate these concerns in web-crawled face recognition datasets.
This paper presents a summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024).
The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones.
arXiv Detail & Related papers (2024-04-06T10:30:31Z)
- Learning Human Action Recognition Representations Without Real Humans [66.61527869763819]
We present a benchmark that leverages real-world videos with humans removed and synthetic data containing virtual humans to pre-train a model.
We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition benchmarks.
Our approach outperforms previous baselines by up to 5%.
arXiv Detail & Related papers (2023-11-10T18:38:14Z)
- SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data [8.026313049094146]
State-of-the-art face recognition networks are often computationally expensive and cannot be used for mobile applications.
We propose a new framework to train lightweight face recognition models by distilling the knowledge of a pretrained teacher face recognition model using synthetic data.
We use synthetic face images without identity labels, mitigating the problems in the intra-class variation generation of synthetic datasets.
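The distillation setup this entry describes (a lightweight student trained to match a frozen teacher on unlabeled synthetic faces) can be sketched minimally as below. This is an illustration of the general technique, not the SynthDistill implementation: the linear teacher/student, the MSE objective, and plain gradient descent are simplifying assumptions, and "synthetic faces" are random vectors standing in for generated images.

```python
import numpy as np

# Minimal sketch of feature-level knowledge distillation on unlabeled
# synthetic data: the student is trained to reproduce the frozen teacher's
# embeddings, so no identity labels are ever required.
rng = np.random.default_rng(42)
D_IN, D_EMB = 32, 8

W_teacher = rng.normal(size=(D_IN, D_EMB))         # frozen "pretrained" teacher
W_student = rng.normal(size=(D_IN, D_EMB)) * 0.01  # lightweight student

def distill_step(x, lr=0.01):
    """One step of MSE distillation: pull student embeddings toward teacher's."""
    global W_student
    t = x @ W_teacher          # teacher embeddings act as targets (no labels)
    s = x @ W_student          # student embeddings
    grad = 2.0 * x.T @ (s - t) / len(x)   # gradient of mean ||s - t||^2
    W_student -= lr * grad
    return float(np.mean((s - t) ** 2))

# Random vectors stand in for synthetic faces; a real pipeline would draw
# batches from a face image generator instead.
losses = [distill_step(rng.normal(size=(64, D_IN))) for _ in range(200)]
print(f"distillation loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point of the sketch is the data flow: the teacher supplies the supervision signal, so intra-class variation and label quality of the synthetic dataset stop being bottlenecks, which is the motivation the summary gives for using unlabeled synthetic images.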
arXiv Detail & Related papers (2023-08-28T19:15:27Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- Procedural Humans for Computer Vision [1.9550079119934403]
We build a parametric model of the face and body, including articulated hands, to generate realistic images of humans based on this body model.
We show that this can be extended to include the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety.
arXiv Detail & Related papers (2023-01-03T15:44:48Z)
- Towards 3D Scene Understanding by Referring Synthetic Models [65.74211112607315]
Existing methods typically rely on extensive annotations of real scene scans.
We explore how synthetic models can help map real scene categories and synthetic features into a unified feature space.
Experiments show that our method achieves an average mAP of 46.08% on ScanNet and 55.49% on S3DIS.
arXiv Detail & Related papers (2022-03-20T13:06:15Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Synthetic Data for Model Selection [2.4499092754102874]
We show that synthetic data can be beneficial for model selection.
We introduce a novel method to calibrate the synthetic error estimation to fit that of the real domain.
arXiv Detail & Related papers (2021-05-03T09:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.