DigiFace-1M: 1 Million Digital Face Images for Face Recognition
- URL: http://arxiv.org/abs/2210.02579v1
- Date: Wed, 5 Oct 2022 22:02:48 GMT
- Title: DigiFace-1M: 1 Million Digital Face Images for Face Recognition
- Authors: Gwangbin Bae, Martin de La Gorce, Tadas Baltrusaitis, Charlie Hewitt,
Dong Chen, Julien Valentin, Roberto Cipolla, Jingjing Shen
- Abstract summary: State-of-the-art face recognition models show impressive accuracy, achieving over 99.8% on the Labeled Faces in the Wild (LFW) dataset.
We introduce a large-scale synthetic dataset for face recognition, obtained by rendering digital faces using a computer graphics pipeline.
- Score: 25.31469201712699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art face recognition models show impressive accuracy, achieving
over 99.8% on Labeled Faces in the Wild (LFW) dataset. Such models are trained
on large-scale datasets that contain millions of real human face images
collected from the internet. Web-crawled face images are severely biased (in
terms of race, lighting, make-up, etc.) and often contain label noise. More
importantly, the face images are collected without explicit consent, raising
ethical concerns. To avoid such problems, we introduce a large-scale synthetic
dataset for face recognition, obtained by rendering digital faces using a
computer graphics pipeline. We first demonstrate that aggressive data
augmentation can significantly reduce the synthetic-to-real domain gap. Having
full control over the rendering pipeline, we also study how each attribute
(e.g., variation in facial pose, accessories and textures) affects the
accuracy. Compared to SynFace, a recent method trained on GAN-generated
synthetic faces, we reduce the error rate on LFW by 52.5% (accuracy from 91.93%
to 96.17%). By fine-tuning the network on a smaller number of real face images
that could reasonably be obtained with consent, we achieve accuracy that is
comparable to the methods trained on millions of real face images.
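The abstract's claim that aggressive data augmentation narrows the synthetic-to-real domain gap can be illustrated with a short, hypothetical augmentation pipeline. The sketch below uses torchvision transforms; the specific operations and parameter values are assumptions chosen for illustration, not the paper's actual recipe.

```python
# Illustrative sketch only: an "aggressive" augmentation pipeline of the kind
# the abstract credits with reducing the synthetic-to-real domain gap.
# The transforms and parameters are assumptions, not the authors' exact recipe.
from torchvision import transforms

aggressive_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(112, scale=(0.8, 1.0)),       # crop/scale jitter
    transforms.ColorJitter(brightness=0.3, contrast=0.3,
                           saturation=0.3, hue=0.1),           # photometric noise
    transforms.RandomGrayscale(p=0.1),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # simulate defocus / low quality
    transforms.ToTensor(),
])

# Hypothetical usage on one rendered face image:
#   from PIL import Image
#   img = aggressive_augmentation(Image.open("synthetic_face.png").convert("RGB"))
```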
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis [71.40724659748787]
DiffusionFace is the first diffusion-based face forgery dataset.
It covers various forgery categories, including unconditional and text-guided facial image generation, Img2Img, Inpaint, and diffusion-based facial exchange algorithms.
It provides essential metadata and a real-world internet-sourced forgery facial image dataset for evaluation.
arXiv Detail & Related papers (2024-03-27T11:32:44Z)
- FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction [3.8502825594372703]
Occlusions often occur in face images in the wild, troubling face-related tasks such as landmark detection, 3D reconstruction, and face recognition.
This paper proposes a novel face segmentation dataset with manually labeled face occlusions sourced from CelebA-HQ and the internet.
We trained a straightforward face segmentation model but obtained SOTA performance, convincingly demonstrating the effectiveness of the proposed dataset.
arXiv Detail & Related papers (2022-01-20T19:44:18Z)
- FaceEraser: Removing Facial Parts for Augmented Reality [10.575917056215289]
Our task is to remove all facial parts and then impose visual elements onto the "blank" face for augmented reality.
We propose a novel data generation technique to produce paired training data that closely mimic the "blank" faces.
Our method has been integrated into commercial products and its effectiveness has been verified with unconstrained user inputs.
arXiv Detail & Related papers (2021-09-22T14:30:12Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in deep convolutional neural networks and to clean them with dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap between models trained on synthetic and real face data (a toy sketch of the mixup idea appears after this list).
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- FedFace: Collaborative Learning of Face Recognition Model [66.84737075622421]
FedFace is a framework for collaborative learning of face recognition models.
It learns an accurate and generalizable face recognition model in which the face images stored at each client are shared neither with other clients nor with the central host.
Our code and pre-trained models will be publicly available.
arXiv Detail & Related papers (2021-04-07T09:25:32Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption that albedo and normals are consistent across frames.
Our network is trained on both real and synthetic data, benefiting from both.
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
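For the identity mixup (IM) mentioned in the SynFace entry above, the following is a minimal sketch of the general mixup idea: interpolating two synthetic identity inputs and their one-hot labels with a shared Beta-sampled coefficient. It illustrates the concept only and is not the SynFace authors' implementation; the function name and parameters are hypothetical.

```python
# Toy sketch of the general "mixup" idea behind identity mixup (IM):
# interpolate two identities (images or latent codes) and their one-hot labels
# with a shared mixing coefficient. Concept illustration only, not the
# SynFace implementation.
import torch


def identity_mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    """Blend two identity inputs and their one-hot labels with a
    Beta(alpha, alpha)-distributed coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix
```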
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.