Improving Fairness using Vision-Language Driven Image Augmentation
- URL: http://arxiv.org/abs/2311.01573v1
- Date: Thu, 2 Nov 2023 19:51:10 GMT
- Title: Improving Fairness using Vision-Language Driven Image Augmentation
- Authors: Moreno D'Incà, Christos Tzelepis, Ioannis Patras, Nicu Sebe
- Abstract summary: Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes targeted by downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
- Score: 60.428157003498995
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fairness is crucial when training a deep-learning discriminative model,
especially in the facial domain. Models tend to correlate specific
characteristics (such as age and skin color) with unrelated attributes
(downstream tasks), resulting in biases which do not correspond to reality. It
is common knowledge that these correlations are present in the data and are
then transferred to the models during training. This paper proposes a method to
mitigate these correlations to improve fairness. To do so, we learn
interpretable and meaningful paths lying in the semantic space of a pre-trained
diffusion model (DiffAE) -- such paths being supervised by contrastive text
dipoles. That is, we learn to edit protected characteristics (age and skin
color). These paths are then applied to augment images to improve the fairness
of a given dataset. We test the proposed method on CelebA-HQ and UTKFace on
several downstream tasks with age and skin color as protected characteristics.
As a proxy for fairness, we compute the difference in accuracy with respect to
the protected characteristics. Quantitative results show how the augmented
images help the model improve the overall accuracy, the aforementioned metric,
and the disparity of equal opportunity. Code is available at:
https://github.com/Moreno98/Vision-Language-Bias-Control.
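The fairness proxy and the disparity of equal opportunity mentioned above are simple to compute from per-group predictions. A minimal sketch, assuming a binary protected characteristic and binary downstream labels as NumPy arrays; the function names and toy data are illustrative, not taken from the authors' repository:

```python
import numpy as np

def accuracy_gap(y_true, y_pred, group):
    """Absolute difference in accuracy between the two protected
    groups -- the fairness proxy described in the abstract."""
    accs = [np.mean(y_pred[group == g] == y_true[group == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1])

def equal_opportunity_disparity(y_true, y_pred, group):
    """Absolute difference in true-positive rate between the two
    protected groups (disparity of equal opportunity)."""
    tprs = []
    for g in (0, 1):
        pos = (group == g) & (y_true == 1)  # positives belonging to group g
        tprs.append(np.mean(y_pred[pos] == 1))
    return abs(tprs[0] - tprs[1])

# Toy example: `group` encodes the protected characteristic
# (e.g. 0 = younger, 1 = older).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(accuracy_gap(y_true, y_pred, group))                # 0.25
print(equal_opportunity_disparity(y_true, y_pred, group)) # ~0.33
```

In the paper's setting, these quantities would be computed on the CelebA-HQ or UTKFace test split for each downstream task, with age or skin color as the protected characteristic.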
Related papers
- FineFACE: Fair Facial Attribute Classification Leveraging Fine-grained Features [3.9440964696313485]
Research highlights the presence of demographic bias in automated facial attribute classification algorithms.
Existing bias mitigation techniques typically require demographic annotations and often incur a trade-off between fairness and accuracy.
This paper proposes a novel approach to fair facial attribute classification by framing it as a fine-grained classification problem.
arXiv Detail & Related papers (2024-08-29T20:08:22Z)
- Leveraging vision-language models for fair facial attribute classification [19.93324644519412]
A general-purpose vision-language model (VLM) is a rich source of knowledge about common sensitive attributes.
We analyze the correspondence between VLM-predicted and human-defined sensitive-attribute distributions.
Experiments on multiple benchmark facial attribute classification datasets show fairness gains of the model over existing unsupervised baselines.
arXiv Detail & Related papers (2024-03-15T18:37:15Z)
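The entry above treats a general-purpose VLM as a zero-shot source of sensitive-attribute labels. A minimal sketch of that idea using an off-the-shelf CLIP model through Hugging Face transformers; the contrastive prompts and image path are illustrative assumptions, not the paper's exact setup:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Contrastive prompt pair for one sensitive attribute (age); hypothetical.
prompts = ["a photo of a young person", "a photo of an old person"]
image = Image.open("face.jpg")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)  # shape (1, 2)
print(dict(zip(prompts, probs[0].tolist())))
```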
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation-learning algorithms improve overall performance in image classification while promoting fairness to some degree.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- Model Debiasing via Gradient-based Explanation on Representation [14.673988027271388]
We propose a novel fairness framework that performs debiasing with regard to sensitive attributes and proxy attributes.
Our framework achieves better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-20T11:57:57Z)
- Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners [88.07317175639226]
We propose a novel approach, Discriminative Stable Diffusion (DSD), which turns pre-trained text-to-image diffusion models into few-shot discriminative learners.
Our approach mainly uses the cross-attention score of a Stable Diffusion model to capture the mutual influence between visual and textual information.
arXiv Detail & Related papers (2023-05-18T05:41:36Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
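As a rough illustration of the masked-image idea in the last entry above, the sketch below zeroes out random patches of a batch of images in PyTorch. It is a generic stand-in, not the paper's exact masking strategy, and the function name is hypothetical:

```python
import torch

def mask_random_patches(images, patch=16, drop_ratio=0.5):
    """Zero out a random subset of non-overlapping patches in a batch
    of images of shape (B, C, H, W); H and W must be divisible by
    `patch`."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    keep = torch.rand(b, gh, gw) > drop_ratio    # per-patch keep decisions
    keep = keep.repeat_interleave(patch, dim=1)  # expand rows to pixels
    keep = keep.repeat_interleave(patch, dim=2)  # expand cols to pixels
    return images * keep.unsqueeze(1).to(images.dtype)

masked = mask_random_patches(torch.randn(4, 3, 224, 224))  # counterfactual views
```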
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.