A Comparative Study of Image-to-Image Translation Using GANs for
Synthetic Child Race Data
- URL: http://arxiv.org/abs/2308.04232v1
- Date: Tue, 8 Aug 2023 12:54:05 GMT
- Title: A Comparative Study of Image-to-Image Translation Using GANs for
Synthetic Child Race Data
- Authors: Wang Yao, Muhammad Ali Farooq, Joseph Lemley, Peter Corcoran
- Abstract summary: This work proposes the utilization of image-to-image transformation to synthesize data of different races and adjust the ethnicity of children's face data.
We consider ethnicity as a style and compare three different image-to-image neural-network-based methods for converting between Caucasian and Asian child face data.
- Score: 1.6536018920603175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lack of ethnic diversity in data has been a limiting factor for face
recognition techniques in the literature. This is particularly the case for
children, for whom data samples are scarce, which presents a challenge when seeking
to adapt machine vision algorithms that are trained on adult data to work on
children. This work proposes the utilization of image-to-image transformation
to synthesize data of different races and thus adjust the ethnicity of
children's face data. We consider ethnicity as a style and compare three
different image-to-image neural-network-based methods, specifically the pix2pix,
CycleGAN, and CUT networks, to convert between Caucasian and Asian child face
data. Experimental validation results on synthetic data demonstrate
the feasibility of using image-to-image transformation methods to generate
various synthetic child data samples with broader ethnic diversity.
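As context, the unpaired setting that CycleGAN and CUT address pairs an adversarial term with a cycle-consistency (or contrastive) term. The following is a minimal, illustrative PyTorch sketch of that idea only; the networks, shapes, and loss weights are toy placeholders, not the architectures or training recipe evaluated in the paper.

```python
# Minimal sketch of an unpaired image-to-image ("ethnicity as style") setup in the
# spirit of CycleGAN. Networks, shapes and loss weights are toy placeholders,
# NOT the architectures or hyper-parameters evaluated in the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a ResNet-based translation generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style critic producing a grid of real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Domain A: Caucasian child faces, domain B: Asian child faces (unpaired sets).
G_ab, G_ba, D_b = TinyGenerator(), TinyGenerator(), TinyDiscriminator()
l1, mse = nn.L1Loss(), nn.MSELoss()

real_a = torch.randn(4, 3, 128, 128)   # stand-in batch from domain A
fake_b = G_ab(real_a)                  # translate A -> B (the "style" change)
rec_a = G_ba(fake_b)                   # translate back B -> A

pred_fake = D_b(fake_b)
adv_loss = mse(pred_fake, torch.ones_like(pred_fake))  # LSGAN adversarial term
cyc_loss = l1(rec_a, real_a)                           # cycle-consistency term
g_loss = adv_loss + 10.0 * cyc_loss                    # lambda_cyc = 10, as in CycleGAN
```

For comparison, pix2pix assumes paired data and replaces the cycle term with a direct L1 term against the paired target, while CUT replaces it with a patch-wise contrastive term.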
Related papers
- ChildDiffusion: Unlocking the Potential of Generative AI and Controllable Augmentations for Child Facial Data using Stable Diffusion and Large Language Models [1.1470070927586018]
The framework is validated by rendering high-quality child faces representing ethnicity data, micro-expressions, face pose variations, eye-blinking effects, different hair colours and styles, aging, and multiple child subjects of different genders in a single frame.
The proposed method circumvents common issues encountered in generative AI tools, such as temporal inconsistency and limited control over the rendered outputs.
arXiv Detail & Related papers (2024-06-17T14:37:14Z) - Towards Inclusive Face Recognition Through Synthetic Ethnicity Alteration [11.451395489475647]
We explore ethnicity alteration and skin tone modification using synthetic face image generation methods to increase the diversity of datasets.
We conduct a detailed analysis by first constructing a balanced face image dataset representing three ethnicities: Asian, Black, and Indian.
We then make use of existing Generative Adversarial Network (GAN)-based image-to-image translation and manifold learning models to alter the ethnicity from one to another.
arXiv Detail & Related papers (2024-05-02T13:31:09Z) - ChildGAN: Large Scale Synthetic Child Facial Data Using Domain
Adaptation in StyleGAN [1.6536018920603175]
ChildGAN is built by performing smooth domain transfer using transfer learning.
The dataset comprises more than 300k distinct data samples.
The results demonstrate that synthetic child facial data of high quality offers an alternative to the cost and complexity of collecting a large-scale dataset from real children.
arXiv Detail & Related papers (2023-07-25T18:04:52Z) - Child Face Recognition at Scale: Synthetic Data Generation and
Performance Benchmark [3.4110993541168853]
HDA-SynChildFaces consists of 1,652 subjects and a total of 188,832 images, each subject being present at various ages and with many different intra-subject variations.
We evaluate the performance of various facial recognition systems on the generated database and compare the results of adults and children at different ages.
arXiv Detail & Related papers (2023-04-23T15:29:26Z) - Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z) - Studying Bias in GANs through the Lens of Race [91.95264864405493]
We study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets.
Our results show that the racial composition of generated images successfully preserves that of the training data.
However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data.
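For background on that observation: in StyleGAN-family generators, truncation pulls sampled latent codes toward a mean latent, trading sample diversity for fidelity, which is one plausible route by which minority modes get suppressed. A minimal, hypothetical sketch of the operation (toy shapes and psi value, not the paper's setup):

```python
# Hypothetical illustration of the truncation trick (StyleGAN-style): latents are
# pulled toward a mean latent, trading diversity for fidelity. Shapes and psi are
# arbitrary choices for illustration.
import torch

def truncate(w, w_mean, psi=0.7):
    """Interpolate latents toward the mean latent; psi = 1.0 disables truncation."""
    return w_mean + psi * (w - w_mean)

w = torch.randn(8, 512)                 # sampled intermediate latents (toy shape)
w_mean = torch.zeros(1, 512)            # in practice, a running average of many latents
w_trunc = truncate(w, w_mean, psi=0.5)  # lower psi -> higher fidelity, lower diversity
```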
arXiv Detail & Related papers (2022-09-06T22:25:56Z) - Random Network Distillation as a Diversity Metric for Both Image and
Text Generation [62.13444904851029]
We develop a new diversity metric that can be applied to data, both synthetic and natural, of any type.
We validate and deploy this metric on both images and text.
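A loose, simplified reading of how random network distillation can serve as a diversity signal is sketched below: a predictor is trained to match a frozen random network on the data, and the residual error is read as novelty. The architectures, shapes, and scoring protocol here are illustrative assumptions, not the paper's exact recipe.

```python
# Loose, simplified sketch of random network distillation as a diversity signal.
# Architectures, shapes and the scoring protocol are assumptions for illustration.
import torch
import torch.nn as nn

target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))     # frozen, random
predictor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))  # trainable
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
images = torch.randn(32, 3, 64, 64)     # stand-in batch of (synthetic) images

# Distil the frozen target on the data being scored.
for _ in range(100):
    loss = ((predictor(images) - target(images)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Residual prediction error as a toy per-sample novelty / diversity signal.
with torch.no_grad():
    per_sample_err = ((predictor(images) - target(images)) ** 2).mean(dim=1)
diversity_score = per_sample_err.mean().item()
```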
arXiv Detail & Related papers (2020-10-13T22:03:52Z) - Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z) - Exploring Racial Bias within Face Recognition via per-subject
Adversarially-Enabled Data Augmentation [15.924281804465252]
We propose a novel adversarially derived data augmentation methodology that aims to enable dataset balance at a per-subject level.
Our aim is to automatically construct a synthesised dataset by transforming facial images across varying racial domains.
In a side-by-side comparison, we show the positive impact our proposed technique can have on the recognition performance for (racial) minority groups.
arXiv Detail & Related papers (2020-04-19T19:46:32Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured using different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm (a generic sketch of such an intra-class loss follows this list).
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
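The intra-class loss mentioned in the last entry pulls real and synthetic features of the same class together. The sketch below is a generic, assumed formulation of such a loss, with class centres computed from real features only; it does not reproduce the paper's RDBP algorithm, and all names and shapes are placeholders.

```python
# Generic, assumed formulation of an intra-class loss between real and synthetic
# features of the same class (class centres come from real features only). This is
# an illustrative reading, not the paper's RDBP algorithm; all names are placeholders.
import torch

def intra_class_loss(feat_real, feat_syn, labels_real, labels_syn):
    """Pull synthetic features toward the mean real feature of their class."""
    loss = feat_real.new_zeros(())
    count = 0
    for c in labels_real.unique():
        real_c = feat_real[labels_real == c]
        syn_c = feat_syn[labels_syn == c]
        if len(real_c) == 0 or len(syn_c) == 0:
            continue
        center = real_c.mean(dim=0)                       # class centre from real data
        loss = loss + ((syn_c - center) ** 2).sum(dim=1).mean()
        count += 1
    return loss / max(count, 1)

# Toy usage with random features and 7 expression classes.
feat_real, feat_syn = torch.randn(16, 128), torch.randn(16, 128)
labels_real, labels_syn = torch.randint(0, 7, (16,)), torch.randint(0, 7, (16,))
loss = intra_class_loss(feat_real, feat_syn, labels_real, labels_syn)
```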