SASMU: boost the performance of generalized recognition model using
synthetic face dataset
- URL: http://arxiv.org/abs/2306.01449v1
- Date: Fri, 2 Jun 2023 11:11:00 GMT
- Title: SASMU: boost the performance of generalized recognition model using
synthetic face dataset
- Authors: Chia-Chun Chung, Pei-Chun Chang, Yong-Sheng Chen, HaoYuan He, Chinson
Yeh
- Abstract summary: We propose SASMU, a simple, novel, and effective method for face recognition using a synthetic dataset.
Our proposed method consists of spatial data augmentation (SA) and spectrum mixup (SMU)
- Score: 5.596292759115785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, deploying a robust face recognition product has become
easy thanks to decades of development in face recognition techniques.
State-of-the-art methods handle not only profile-image verification but also
in-the-wild images almost perfectly. However, privacy concerns have risen
rapidly, since mainstream research results are powered by tons of web-crawled
data, which raises privacy-invasion issues. The community has tried to escape
this predicament entirely by training face recognition models with synthetic
data, but such models face a severe domain gap and still need access to real
images and identity labels for fine-tuning. In this paper, we propose SASMU, a
simple, novel, and effective method for face recognition using a synthetic
dataset. Our proposed method consists of spatial data augmentation (SA) and
spectrum mixup (SMU). We first analyze the existing synthetic datasets for
developing a face recognition system. Then, we reveal that heavy data
augmentation is helpful for boosting performance when using synthetic data. By
analyzing previous frequency-mixup studies, we propose a novel method for
domain generalization. Extensive experimental results have demonstrated the
effectiveness of SASMU, achieving state-of-the-art performance on several
common benchmarks, such as LFW, AgeDB-30, CA-LFW, CFP-FP, and CP-LFW.
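The exact SMU formulation is given in the paper; as a rough illustration only, a common form of spectrum (frequency-domain) mixup blends the FFT amplitude spectra of two images while keeping the phase of the first. A minimal NumPy sketch, assuming this amplitude-mixing variant:

```python
import numpy as np

def spectrum_mixup(img_a, img_b, lam=0.5):
    """Illustrative frequency mixup: convex-combine the amplitude
    spectra of two images while keeping the phase of img_a.
    (Assumed formulation; not necessarily SASMU's exact SMU.)"""
    fa = np.fft.fft2(img_a, axes=(0, 1))
    fb = np.fft.fft2(img_b, axes=(0, 1))
    amp_a, pha_a = np.abs(fa), np.angle(fa)
    amp_b = np.abs(fb)
    # Mix amplitudes; phase (which carries most structure) stays from img_a.
    amp_mix = lam * amp_a + (1.0 - lam) * amp_b
    mixed = amp_mix * np.exp(1j * pha_a)
    return np.fft.ifft2(mixed, axes=(0, 1)).real
```

With `lam=1.0` the function returns `img_a` unchanged, which makes the mixing strength easy to sanity-check.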
Related papers
- SDFR: Synthetic Data for Face Recognition Competition [51.9134406629509]
Large-scale face recognition datasets are collected by crawling the Internet and without individuals' consent, raising legal, ethical, and privacy concerns.
Recently several works proposed generating synthetic face recognition datasets to mitigate concerns in web-crawled face recognition datasets.
This paper presents a summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024).
The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones.
arXiv Detail & Related papers (2024-04-06T10:30:31Z)
- If It's Not Enough, Make It So: Reducing Authentic Data Demand in Face Recognition through Synthetic Faces [16.977459035497162]
Large face datasets are primarily sourced from web-based images, lacking explicit user consent.
In this paper, we examine whether and how synthetic face data can be used to train effective face recognition models.
arXiv Detail & Related papers (2024-04-04T15:45:25Z)
- Unsupervised Face Recognition using Unlabeled Synthetic Data [16.494722503803196]
We propose an unsupervised face recognition model based on unlabeled synthetic data (USynthFace).
Our proposed USynthFace learns to maximize the similarity between two augmented images of the same synthetic instance.
We demonstrate the effectiveness of USynthFace in achieving relatively high recognition accuracy using unlabeled synthetic data.
arXiv Detail & Related papers (2022-11-14T14:05:19Z)
- How to Boost Face Recognition with StyleGAN? [13.067766076889995]
State-of-the-art face recognition systems require vast amounts of labeled training data.
The self-supervised revolution in industry motivates research on adapting related techniques to facial recognition.
We show that a simple approach based on fine-tuning pSp encoder for StyleGAN allows us to improve upon the state-of-the-art facial recognition.
arXiv Detail & Related papers (2022-06-21T16:42:04Z)
- SFace: Privacy-friendly and Accurate Face Recognition using Synthetic Data [9.249824128880707]
We propose and investigate the feasibility of using a privacy-friendly synthetically generated face dataset to train face recognition models.
To address the privacy aspect of using such data to train a face recognition model, we provide extensive evaluation experiments on the identity relation between the synthetic dataset and the original authentic dataset used to train the generative model.
We also propose to train face recognition on our privacy-friendly dataset, SFace, using three different learning strategies, multi-class classification, label-free knowledge transfer, and combined learning of multi-class classification and knowledge transfer.
arXiv Detail & Related papers (2022-03-30T20:44:33Z)
- Escaping Data Scarcity for High-Resolution Heterogeneous Face Hallucination [68.78903256687697]
In Heterogeneous Face Recognition (HFR), the objective is to match faces across two different domains such as visible and thermal.
Recent methods attempting to fill the gap via synthesis have achieved promising results, but their performance is still limited by the scarcity of paired training data.
In this paper, we propose a new face hallucination paradigm for HFR, which not only enables data-efficient synthesis but also allows scaling up model training without breaking any privacy policy.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematically empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
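SynFace applies its identity mixup inside the face generator; as a generic illustration only, plain input-level mixup interpolates two samples and their one-hot identity labels with a Beta-sampled coefficient. A minimal NumPy sketch of that standard formulation:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Generic mixup: convex-combine two samples and their one-hot
    labels. (Illustrative stand-in; SynFace's IM mixes identities
    at the generator level rather than on raw pixels.)"""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

The mixed label `y` keeps a total mass of 1, so a standard cross-entropy loss can consume it directly.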
arXiv Detail & Related papers (2021-05-06T08:07:25Z)
- Federated Face Recognition [30.344709613627764]
Federated Learning is proposed to train a model cooperatively without sharing data between parties.
This paper proposes a framework named FedFace to innovate federated learning for face recognition.
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, which iteratively synthesizes better images by IACycleGAN.
arXiv Detail & Related papers (2021-02-06T10:56:00Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences.