ChildGAN: Large Scale Synthetic Child Facial Data Using Domain
Adaptation in StyleGAN
- URL: http://arxiv.org/abs/2307.13746v1
- Date: Tue, 25 Jul 2023 18:04:52 GMT
- Title: ChildGAN: Large Scale Synthetic Child Facial Data Using Domain
Adaptation in StyleGAN
- Authors: Muhammad Ali Farooq, Wang Yao, Gabriel Costache, Peter Corcoran
- Abstract summary: ChildGAN is built by performing smooth domain transfer using transfer learning.
The dataset comprises more than 300k distinct data samples.
The results demonstrate that high-quality synthetic child facial data offers an alternative to collecting a large-scale dataset from real children, avoiding the associated cost and complexity.
- Score: 1.6536018920603175
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this research work, we propose ChildGAN, a pair of GAN networks
derived from StyleGAN2 for generating synthetic facial data of boys and girls.
ChildGAN is built by performing smooth domain transfer using transfer learning.
It provides photo-realistic, high-quality data samples. A large-scale dataset
is rendered with a variety of smart facial transformations: facial expressions,
age progression, eye blink effects, head pose, skin and hair color variations,
and variable lighting conditions. The dataset comprises more than 300k distinct
data samples. Further, the uniqueness and characteristics of the rendered
facial features are validated by running different computer vision application
tests, which include a CNN-based child gender classifier, face localization and
facial landmark detection, identity similarity evaluation using ArcFace, and eye
detection with eye aspect ratio tests. The results demonstrate that high-quality
synthetic child facial data offers an alternative to collecting a large-scale
dataset from real children, avoiding the associated cost and complexity.
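As a concrete illustration of two of the validation tests named above, the sketch below (not the authors' code) scores identity consistency as the cosine similarity of two ArcFace-style face embeddings and computes the standard six-landmark eye aspect ratio (EAR) used for blink detection. The embedding extractor and landmark detector are assumed to come from an external face-analysis pipeline; all names and sample values are hypothetical.

```python
# Minimal sketch of two validation checks mentioned in the abstract:
# ArcFace-style identity similarity and the 6-landmark eye aspect ratio.
import numpy as np

def identity_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (e.g. 512-d ArcFace vectors)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for one eye given six (x, y) landmarks ordered p1..p6
    (outer corner, two upper-lid points, inner corner, two lower-lid points):
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return float(vertical / (2.0 * horizontal))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical embeddings of two renders of the same synthetic identity.
    emb_a = rng.normal(size=512)
    emb_b = emb_a + 0.1 * rng.normal(size=512)
    print("identity similarity:", round(identity_similarity(emb_a, emb_b), 3))

    # Hypothetical open-eye landmarks; a blink render would give a much smaller EAR.
    open_eye = np.array([[0, 0], [10, 6], [20, 6], [30, 0], [20, -6], [10, -6]], dtype=float)
    print("eye aspect ratio:", round(eye_aspect_ratio(open_eye), 3))
```

In such a check, a similarity close to 1 would indicate the same rendered identity is preserved across transformations, while the EAR drops towards 0 for the eye-blink renders.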
Related papers
- ChildDiffusion: Unlocking the Potential of Generative AI and Controllable Augmentations for Child Facial Data using Stable Diffusion and Large Language Models [1.1470070927586018]
The framework is validated by rendering high-quality child faces with varied ethnicities, micro-expressions, face pose variations, eye blinking effects, different hair colours and styles, aging, and multiple child subjects of different genders in a single frame.
The proposed method circumvents common issues encountered in generative AI tools, such as temporal inconsistency and limited control over the rendered outputs.
arXiv Detail & Related papers (2024-06-17T14:37:14Z) - Leveraging Synthetic Data for Generalizable and Fair Facial Action Unit Detection [9.404202619102943]
We propose to use synthetically generated data and multi-source domain adaptation (MSDA) to address the problems of the scarcity of labeled data and the diversity of subjects.
Specifically, we propose to generate a diverse dataset through synthetic facial expression re-targeting.
To further improve gender fairness, PM2 matches the features of the real data with those of a female and a male synthetic image.
arXiv Detail & Related papers (2024-03-15T23:50:18Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although there have been a number of publicly available face forgery datasets, the forged faces in them are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - SwinFace: A Multi-task Transformer for Face Recognition, Expression
Recognition, Age Estimation and Attribute Estimation [60.94239810407917]
This paper presents a multi-purpose algorithm for simultaneous face recognition, facial expression recognition, age estimation, and face attribute estimation based on a single Swin Transformer.
To address the conflicts among multiple tasks, a Multi-Level Channel Attention (MLCA) module is integrated into each task-specific analysis.
Experiments show that the proposed model has a better understanding of the face and achieves excellent performance for all tasks.
arXiv Detail & Related papers (2023-08-22T15:38:39Z) - A Comparative Study of Image-to-Image Translation Using GANs for
Synthetic Child Race Data [1.6536018920603175]
This work proposes using image-to-image translation to synthesize data of different races and adjust the ethnicity of children's face data.
We consider ethnicity as a style and compare three different image-to-image neural network based methods for converting between Caucasian and Asian child face data.
arXiv Detail & Related papers (2023-08-08T12:54:05Z) - Child Face Recognition at Scale: Synthetic Data Generation and
Performance Benchmark [3.4110993541168853]
HDA-SynChildFaces consists of 1,652 subjects and a total of 188,832 images, each subject being present at various ages and with many different intra-subject variations.
We evaluate the performance of various facial recognition systems on the generated database and compare the results of adults and children at different ages.
arXiv Detail & Related papers (2023-04-23T15:29:26Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO presents an improvement in facial expression recognition performance over six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z) - Methodology for Building Synthetic Datasets with Virtual Humans [1.5556923898855324]
Large datasets can be used for improved, targeted training of deep neural networks.
In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities.
arXiv Detail & Related papers (2020-06-21T10:29:36Z) - InterFaceGAN: Interpreting the Disentangled Face Representation Learned
by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection; a minimal latent-editing sketch in this spirit appears after this list.
arXiv Detail & Related papers (2020-05-18T18:01:22Z) - DotFAN: A Domain-transferred Face Augmentation Network for Pose and
Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
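The InterFaceGAN entry above edits faces by finding, for each binary attribute, a linear boundary in the GAN latent space and moving latent codes along its unit normal. Below is a minimal, self-contained sketch of that idea, not the authors' released code: the latent codes, attribute labels, and final generator call are placeholders, and a scikit-learn linear SVM stands in for the boundary-fitting step.

```python
# Minimal sketch of InterFaceGAN-style linear latent editing.
import numpy as np
from sklearn.svm import LinearSVC

def attribute_direction(latents: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Fit a linear boundary in latent space and return its unit normal,
    interpreted as the direction that controls the attribute."""
    clf = LinearSVC(C=1.0, max_iter=10000).fit(latents, labels)
    n = clf.coef_[0]
    return n / np.linalg.norm(n)

def edit_latent(z: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code along the semantic direction: z' = z + alpha * n."""
    return z + alpha * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latents = rng.normal(size=(1000, 512))
    # Hypothetical attribute labels; in practice they would come from an
    # off-the-shelf classifier applied to images generated from each latent code.
    labels = (latents[:, 0] > 0).astype(int)
    n = attribute_direction(latents, labels)
    z_edited = edit_latent(latents[0], n, alpha=3.0)
    # images = generator(z_edited)  # placeholder: decode with a pretrained GAN
    print("moved along attribute axis by", float(n @ (z_edited - latents[0])))
```

Larger values of alpha give stronger edits but eventually degrade identity, which is why the paper also studies correlations between semantics and uses subspace projection to disentangle them before editing.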