Uncovering Bias in Face Generation Models
- URL: http://arxiv.org/abs/2302.11562v1
- Date: Wed, 22 Feb 2023 18:57:35 GMT
- Title: Uncovering Bias in Face Generation Models
- Authors: Cristian Muñoz, Sara Zannone, Umar Mohammed, Adriano Koshiyama
- Abstract summary: Recent advancements in GANs and diffusion models have enabled the creation of high-resolution, hyper-realistic images.
These models may misrepresent certain social groups and present bias.
This work presents a novel analysis covering architectures and embedding spaces for a fine-grained understanding of bias across three approaches: generators, attribute modifiers, and post-processing bias mitigators.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in GANs and diffusion models have enabled the creation of
high-resolution, hyper-realistic images. However, these models may misrepresent
certain social groups and present bias. Understanding bias in these models
remains an important research question, especially for tasks that support
critical decision-making and could affect minorities. The contribution of this
work is a novel analysis covering architectures and embedding spaces for
fine-grained understanding of bias over three approaches: generators, attribute
modifier, and post-processing bias mitigators. This work shows that generators
suffer from bias across all social groups, with attribute preferences of
75%-85% for whiteness and 60%-80% for the female gender (for all
trained CelebA models) and low probabilities of generating children and older
men. Modifiers and mitigators work as post-processors and change the generator's
performance. For instance, attribute channel perturbation strategies modify the
embedding spaces (a minimal sketch of such an edit follows the abstract). We
quantify the influence of this change on group fairness by
measuring the impact on image quality and group features. Specifically, we use
the Fréchet Inception Distance (FID), the Face Matching Error, and the
Self-Similarity score. For InterFaceGAN, we analyze one- and two-attribute
channel perturbations and examine their effect on the fairness distribution and
image quality. Finally, we analyze the post-processing bias
mitigators, which are the fastest and most computationally efficient way to
mitigate bias. We find that these mitigation techniques show similar results on
KL divergence and FID score (a toy KL comparison is also sketched below);
however, self-similarity scores show a different
feature concentration on the new groups of the data distribution. The
weaknesses and ongoing challenges described in this work must be considered in
the pursuit of creating fair and unbiased face generation models.
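The attribute channel perturbations analyzed above follow the InterFaceGAN recipe: shift a latent code along the normal of a hyperplane that separates an attribute in latent space. The sketch below illustrates one- and two-attribute edits; the random direction vectors, the alpha values, and the `generator` call are illustrative assumptions, not the paper's code (real directions come from a linear classifier trained on labeled latents).

```python
# Minimal sketch of InterFaceGAN-style attribute editing (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512  # typical StyleGAN/PGGAN latent size

def normalize(v):
    return v / np.linalg.norm(v)

# In InterFaceGAN these normals come from a linear SVM that separates an
# attribute (e.g. gender, age) in latent space; random stand-ins here.
n_gender = normalize(rng.standard_normal(LATENT_DIM))
n_age = normalize(rng.standard_normal(LATENT_DIM))

def edit_one(z, n, alpha):
    """One attribute channel perturbation: z' = z + alpha * n."""
    return z + alpha * n

def edit_two(z, n1, alpha1, n2, alpha2):
    """Two attribute channels jointly: z' = z + alpha1 * n1 + alpha2 * n2."""
    return z + alpha1 * n1 + alpha2 * n2

z = rng.standard_normal(LATENT_DIM)
z_one = edit_one(z, n_gender, alpha=2.0)         # move along one direction
z_two = edit_two(z, n_gender, -2.0, n_age, 3.0)  # combine two directions
# Each edited code would then be decoded, e.g. img = generator(z_two), and the
# resulting images scored with FID / face matching / self-similarity.
```

Because such edits move samples through latent space, they change not only the target attribute but potentially the group composition and image quality of the output, which is exactly what the FID, Face Matching Error, and Self-Similarity measurements are meant to capture.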
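To make the group fairness measurement concrete, here is a small, self-contained sketch of the KL divergence comparison between an observed attribute distribution over generated faces and a balanced reference. The classifier counts and the 80/20 split are hypothetical, chosen only to echo the 75%-85% whiteness preference reported above.

```python
# Toy KL divergence check of attribute balance in generated samples
# (our illustration, not the paper's evaluation code).
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) in nats for discrete probability vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical attribute counts from a classifier run over 10,000 generated faces.
observed = {"white": 8_000, "non_white": 2_000}
p_observed = np.array(list(observed.values())) / sum(observed.values())
p_balanced = np.full(len(observed), 1.0 / len(observed))

print(f"KL(observed || balanced) = {kl_divergence(p_observed, p_balanced):.4f}")
# ~0.1927 nats for this 80/20 split; 0.0 would mean a perfectly balanced generator.
```

A mitigator that reshapes the output distribution should push this divergence toward zero; the abstract's point is that two mitigators can look identical on KL and FID yet concentrate features differently, which is why the Self-Similarity score is reported alongside them.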
Related papers
- Understanding trade-offs in classifier bias with quality-diversity optimization: an application to talent management
A major struggle for the development of fair AI models lies in the bias implicit in the data available to train such models.
We propose a method for visualizing the biases inherent in a dataset and understanding the potential trade-offs between fairness and accuracy.
arXiv Detail & Related papers (2024-11-25T22:14:02Z)
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-Language Model from a Causal Mediation Perspective
Vision-language models pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with objects or scenarios.
We propose a framework that incorporates causal mediation analysis to measure and map the pathways of bias generation and propagation.
arXiv Detail & Related papers (2024-07-03T05:19:45Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Analyzing Bias in Diffusion-based Face Generation Models
Diffusion models are increasingly popular in synthetic data generation and image editing applications.
We investigate the presence of bias in diffusion-based face generation models with respect to attributes such as gender, race, and age.
We examine how dataset size affects the attribute composition and perceptual quality of both diffusion and Generative Adversarial Network (GAN) based face generation models.
arXiv Detail & Related papers (2023-05-10T18:22:31Z)
- When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets allows us to exhibit significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)