Understanding Gender and Racial Disparities in Image Recognition Models
- URL: http://arxiv.org/abs/2107.09211v1
- Date: Tue, 20 Jul 2021 01:05:31 GMT
- Title: Understanding Gender and Racial Disparities in Image Recognition Models
- Authors: Rohan Mahadev, Anindya Chakravarti
- Abstract summary: We investigate replacing the binary cross-entropy loss with a multi-label softmax cross-entropy loss on a multi-label classification problem.
We use the MR2 dataset to evaluate fairness in the model outcomes, interpret the model's mistakes by examining its activations, and suggest possible fixes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large-scale image classification models trained on top of popular datasets
such as ImageNet have been shown to have a distributional skew which leads to
disparities in prediction accuracy across different subsections of population
demographics. Many approaches have been proposed to correct this
distributional skew, using methods that alter the model before, during, and
after training. We investigate one such approach, which replaces the binary
cross-entropy loss with a multi-label softmax cross-entropy loss, on the
Inclusive Images dataset, a subset of the OpenImages V6 dataset. We use the
MR2 dataset, which contains images of people with self-identified gender and
race attributes, to evaluate fairness in the model outcomes; we interpret the
model's mistakes by examining its activations and suggest possible fixes.
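To make the loss-function swap concrete, here is a minimal PyTorch sketch. It is not the authors' code: the multi-hot target layout and the formulation that spreads the softmax target mass uniformly over each image's positive labels are assumptions, since the abstract does not spell them out.

```python
# A minimal sketch of the two losses compared in the paper (PyTorch).
# Shapes are assumptions: `logits` is (N, C) and `targets` is an (N, C)
# multi-hot matrix marking each image's positive labels.
import torch
import torch.nn.functional as F

def binary_cross_entropy_loss(logits, targets):
    # Baseline: one independent sigmoid per class.
    return F.binary_cross_entropy_with_logits(logits, targets.float())

def multilabel_softmax_loss(logits, targets):
    # Investigated alternative (assumed formulation): softmax over all
    # classes, with target mass split uniformly over the k positive
    # labels of each image.
    targets = targets.float()
    target_dist = targets / targets.sum(dim=1, keepdim=True).clamp(min=1)
    log_probs = F.log_softmax(logits, dim=1)
    return -(target_dist * log_probs).sum(dim=1).mean()

# Toy usage: 2 images, 4 classes; the first image has two positive labels.
logits = torch.randn(2, 4)
targets = torch.tensor([[1, 0, 1, 0], [0, 0, 0, 1]])
print(binary_cross_entropy_loss(logits, targets))
print(multilabel_softmax_loss(logits, targets))
```

The softmax variant couples the classes, so probability mass claimed by an over-predicted majority class directly competes with minority classes, which is the intuition behind trying it on a skewed label distribution.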
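The MR2 evaluation then boils down to disaggregating an outcome metric by self-identified subgroup. A hedged sketch of that disaggregation follows; the choice of per-group accuracy and the maximum pairwise gap as the reported numbers is an assumption, not something the abstract fixes.

```python
# Sketch of a subgroup disaggregation, assuming per-image correctness
# flags and subgroup labels (the field names are hypothetical).
from collections import defaultdict

def subgroup_accuracy(correct, groups):
    """Return per-group accuracy and the largest pairwise gap."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ok, g in zip(correct, groups):
        hits[g] += int(ok)
        totals[g] += 1
    acc = {g: hits[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy usage with two self-identified subgroups.
acc, gap = subgroup_accuracy(
    correct=[1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
print(acc, gap)  # e.g. accuracies {'A': 0.67, 'B': 0.33} and gap 0.33
```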
Related papers
- Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models [55.2480439325792]
We propose a new pretext task: performing image denoising and mask prediction simultaneously on the first domain.
We show that fine-tuning a model pretrained using this approach leads to better results than fine-tuning a similar model trained using either supervised or unsupervised pretraining.
arXiv Detail & Related papers (2024-08-06T20:19:06Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using counterfactual images generated with language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Incorporating Crowdsourced Annotator Distributions into Ensemble Modeling to Improve Classification Trustworthiness for Ancient Greek Papyri [3.870354915766567]
Two issues which complicate the problem on such datasets are class imbalance and ground-truth uncertainty in labeling.
The application of ensemble modeling to such datasets can help identify images where the ground-truth is questionable and quantify the trustworthiness of those samples.
arXiv Detail & Related papers (2022-10-28T19:39:14Z)
- Estimating Appearance Models for Image Segmentation via Tensor Factorization [0.0]
We propose a new approach to directly estimate appearance models from the image without prior information on the underlying segmentation.
Our method uses local high-order color statistics from the image as input to a tensor-factorization-based estimator for latent variable models.
This approach can estimate models in multi-region images and automatically outputs the region proportions without prior user interaction.
arXiv Detail & Related papers (2022-08-16T17:21:00Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on the (approximate) knowledge of the biasing mechanisms at work, our approach consists in reweighting the observations (an importance-weighting sketch follows the list below).
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images (a simplified sketch of this mutual-teaching step follows the list below).
arXiv Detail & Related papers (2020-11-25T03:29:52Z)
- Background Splitting: Finding Rare Classes in a Sea of Background [55.03789745276442]
We focus on the real-world problem of training accurate deep models for image classification of a small number of rare categories.
In these scenarios, almost all images belong to the background category in the dataset (>95% of the dataset is background).
We demonstrate that both standard fine-tuning approaches and state-of-the-art approaches for training on imbalanced datasets do not produce accurate deep models in the presence of this extreme imbalance.
arXiv Detail & Related papers (2020-08-28T23:05:15Z)
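As referenced in the biased-image-datasets entry above, reweighting under (approximately) known biasing mechanisms can be sketched in a few lines. This is an illustrative reduction, not that paper's estimator: it assumes the per-group sampling rates are known, and all names and rates below are hypothetical.

```python
# Importance-weighting sketch (assumed setup: known sampling rates per group).
def importance_weights(p_true, p_observed):
    """Weight each group by how under- or over-represented it is."""
    return {g: p_true[g] / p_observed[g] for g in p_true}

# Hypothetical rates: group "B" is underrepresented in the training data.
w = importance_weights(
    p_true={"A": 0.5, "B": 0.5},
    p_observed={"A": 0.8, "B": 0.2},
)
print(w)  # {'A': 0.625, 'B': 2.5}

# A weighted empirical loss would then scale each example's loss by w[group].
```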
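Likewise, the mutual-teaching scheme from the semi-supervised pose estimation entry can be sketched as a generic co-training step. This PyTorch skeleton is an assumed simplification: it uses plain classifiers and a confidence threshold in place of that paper's pose networks and easy/hard image split.

```python
# Illustrative mutual-teaching step (assumed simplification: generic
# classifiers; a confidence threshold stands in for the easy/hard split).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mutual_teaching_step(net_a, net_b, opt_a, opt_b, unlabeled, thresh=0.9):
    # Each network pseudo-labels the unlabeled batch for the other one.
    with torch.no_grad():
        conf_a, labels_a = F.softmax(net_a(unlabeled), dim=1).max(dim=1)
        conf_b, labels_b = F.softmax(net_b(unlabeled), dim=1).max(dim=1)

    # Train net B on net A's confident predictions, and vice versa.
    for net, opt, conf, labels in (
        (net_b, opt_b, conf_a, labels_a),
        (net_a, opt_a, conf_b, labels_b),
    ):
        mask = conf >= thresh
        if mask.any():
            opt.zero_grad()
            loss = F.cross_entropy(net(unlabeled[mask]), labels[mask])
            loss.backward()
            opt.step()

# Toy usage with two small linear classifiers on random "images".
net_a = nn.Sequential(nn.Flatten(), nn.Linear(16, 3))
net_b = nn.Sequential(nn.Flatten(), nn.Linear(16, 3))
opt_a = torch.optim.SGD(net_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(net_b.parameters(), lr=0.01)
mutual_teaching_step(net_a, net_b, opt_a, opt_b, torch.randn(8, 4, 4), thresh=0.3)
```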
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.