Improving Fairness in Image Classification via Sketching
- URL: http://arxiv.org/abs/2211.00168v1
- Date: Mon, 31 Oct 2022 22:26:32 GMT
- Title: Improving Fairness in Image Classification via Sketching
- Authors: Ruichen Yao, Ziteng Cui, Xiaoxiao Li, Lin Gu
- Abstract summary: Deep neural networks (DNNs) tend to make unfair predictions when the training data are collected from different sub-populations.
We propose to use sketching to handle this phenomenon.
We evaluate our method through extensive experiments on both general and medical scene datasets.
- Score: 14.154930352612926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness is a fundamental requirement for trustworthy and human-centered
Artificial Intelligence (AI) systems. However, deep neural networks (DNNs) tend
to make unfair predictions when the training data are collected from
sub-populations with different attributes (e.g., color, sex, age), leading to
biased DNN predictions. We observe that this troubling phenomenon often
originates in the data itself: bias information is encoded into the DNN along
with the useful information (e.g., class and semantic information). We therefore
propose to use sketching to handle this phenomenon. Without sacrificing the
utility of the data, we explore image-to-sketching methods that preserve the
semantic information needed for the target classification while filtering out
the useless bias information. In addition, we design a fair loss to further
improve model fairness. We evaluate our method through extensive experiments on
both general and medical scene datasets. Our results show that a suitable
image-to-sketching method improves model fairness and achieves results
competitive with the state of the art.
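The paper's implementation is not reproduced on this page; the snippet below is a minimal sketch of the two ingredients described in the abstract. It pairs a crude edge-map transform (a placeholder for the learned image-to-sketching models the paper actually evaluates) with a group-gap fairness penalty; the Canny thresholds, the `lambda_fair` weight, and the max-gap form of the fair loss are assumptions, not the authors' exact design.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def to_sketch(image_bgr: np.ndarray) -> np.ndarray:
    """Crude image-to-sketch stand-in: Canny edge map.

    The paper evaluates learned image-to-sketching models; Canny is only
    a cheap placeholder showing where the transform slots in. The output
    drops color and texture, which often carry the bias cues.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200)

def fair_loss(logits, labels, groups, lambda_fair=1.0):
    """Cross-entropy plus a penalty on the worst between-group loss gap.

    `groups` holds the sensitive-attribute id of each sample; the max-gap
    penalty is an assumed form of the paper's fair loss, not its exact one.
    """
    ce = F.cross_entropy(logits, labels, reduction="none")
    group_means = torch.stack([ce[groups == g].mean()
                               for g in torch.unique(groups)])
    return ce.mean() + lambda_fair * (group_means.max() - group_means.min())
```

In training, each image would be sketched before entering the classifier, so color-borne bias cues never reach the network, while the fair loss discourages any residual per-group error gap.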
Related papers
- DataDream: Few-shot Guided Dataset Generation [90.09164461462365]
We propose a framework for synthesizing classification datasets that more faithfully represent the real data distribution.
DataDream fine-tunes LoRA weights for the image generation model on the few real images before generating the training data using the adapted model.
We then fine-tune LoRA weights for CLIP using the synthetic data to improve downstream image classification over previous approaches on a large variety of datasets.
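A rough, assumed sketch of this two-stage recipe is below: the generator-adaptation helper `finetune_generator_lora` is hypothetical, while the `peft` and `transformers` calls for the CLIP stage are real APIs; all hyperparameters are illustrative.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import CLIPModel

# Stage 1 (hypothetical helper): adapt the image generator to the few
# real shots with LoRA, then sample a synthetic training set from it.
# synthetic_set = finetune_generator_lora(real_few_shot).sample(n=1000)

# Stage 2: attach LoRA adapters to CLIP and fine-tune on the synthetic
# set for the downstream classification task.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"])
clip = get_peft_model(clip, lora_cfg)  # only the adapter weights train
optimizer = torch.optim.AdamW(clip.parameters(), lr=1e-4)
```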
arXiv Detail & Related papers (2024-07-15T17:10:31Z)
- Mitigating Bias Using Model-Agnostic Data Attribution [2.9868610316099335]
Mitigating bias in machine learning models is a critical endeavor for ensuring fairness and equity.
We propose a novel approach to address bias by leveraging pixel image attributions to identify and regularize regions of images containing bias attributes.
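A minimal sketch of the regularization idea, assuming the bias regions are already available as pixel masks (the paper instead locates them via model-agnostic pixel attributions), might look like:

```python
import torch
import torch.nn.functional as F

def attribution_bias_penalty(model, images, labels, bias_masks, beta=0.1):
    """Penalize input-gradient attribution mass inside known bias regions.

    `bias_masks` (same spatial shape as `images`, 1 = bias region) is an
    assumed input; `beta` is an illustrative weight.
    """
    images = images.clone().requires_grad_(True)
    ce = F.cross_entropy(model(images), labels)
    # Saliency: gradient of the loss w.r.t. the input pixels.
    grads, = torch.autograd.grad(ce, images, create_graph=True)
    return ce + beta * (grads.abs() * bias_masks).mean()
```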
arXiv Detail & Related papers (2024-05-08T13:00:56Z)
- Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement [3.0820287240219795]
We propose a novel approach to mitigate biases in computer vision models by utilizing counterfactual generation and fine-tuning.
Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples.
We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods.
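The summary does not specify the counterfactual generator or the fine-grained adversarial loss, so the sketch below substitutes an FGSM-style perturbation against an auxiliary bias-attribute head, with a linear epsilon schedule standing in for the curriculum:

```python
import torch
import torch.nn.functional as F

def bias_counterfactual(images, bias_head, bias_labels, eps):
    """FGSM-style stand-in for counterfactual generation: nudge pixels in
    the direction that changes an auxiliary bias classifier's output."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(bias_head(images), bias_labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).detach().clamp(0, 1)

def curriculum_epsilons(n_epochs, eps_max=0.05):
    """Assumed curriculum: weaker perturbations in early epochs."""
    return [eps_max * (e + 1) / n_epochs for e in range(n_epochs)]
```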
arXiv Detail & Related papers (2024-04-18T00:41:32Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall image classification performance by promoting fairness to some degree.
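A small diagnostic in the spirit of this study: compute per-class recall and report its spread, which is the class-level fairness gap the paper highlights (the sketch assumes every class appears among `labels`):

```python
import torch

def per_class_recall(preds, labels, n_classes):
    """Per-class recall; a wide spread across classes is the class-level
    unfairness this study reports."""
    recalls = torch.stack([(preds[labels == c] == c).float().mean()
                           for c in range(n_classes)])
    return recalls, recalls.max() - recalls.min()  # fairness gap
```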
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes (downstream tasks).
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation [27.780618650580923]
Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and an output class in training data.
We call the unwanted correlations "data biases," and the visual features causing data biases "bias factors".
It is challenging to identify and mitigate biases automatically without human intervention.
arXiv Detail & Related papers (2022-09-14T00:44:41Z)
- Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias [10.639605996067534]
Contextual information is a valuable cue for Deep Neural Networks (DNNs) to learn better representations and improve accuracy.
In COCO, many object categories have a much higher co-occurrence with men compared to women, which can bias a DNN's prediction in favor of men.
We introduce a data repair algorithm using the coefficient of variation, which can curate fair and contextually balanced data for a protected class.
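A minimal illustration of the coefficient-of-variation signal, with invented counts (not COCO's actual statistics) and an assumed 0.5 threshold:

```python
import numpy as np

def cooccurrence_cv(counts_per_group: np.ndarray) -> float:
    """Coefficient of variation (std / mean) of a category's co-occurrence
    counts across protected groups; 0 means perfectly balanced context."""
    counts = counts_per_group.astype(float)
    return counts.std() / counts.mean()

# e.g., a category seen with [men, women] -- illustrative numbers only.
skateboard = np.array([820, 140])
if cooccurrence_cv(skateboard) > 0.5:   # assumed threshold
    print("contextually skewed; curate/rebalance this category's images")
```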
arXiv Detail & Related papers (2021-10-20T06:00:03Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how models of the biasing mechanisms can be applied to remedy such problems in the context of visual recognition.
Based on (approximate) knowledge of the biasing mechanisms at work, our approach consists of reweighting the observations.
We propose to use a low dimensional image representation, shared across the image databases.
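A compact sketch of the reweighting idea, under the simplifying assumption that each example's selection probability under the biasing mechanism is known:

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, sampling_probs):
    """Importance-weighted ERM: examples over-sampled by the (approximately
    known) biasing mechanism get down-weighted by 1 / sampling_probs so the
    objective matches the unbiased distribution."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    weights = 1.0 / sampling_probs
    return (weights / weights.sum() * ce).sum()
```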
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
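A simplified stand-in for the neural mutual-information estimator (a Donsker-Varadhan/MINE-style bound); CSAD's cross-sample pairing scheme and full adversarial training loop are omitted:

```python
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Donsker-Varadhan lower bound on the mutual information between
    target-task features and bias features."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feat_target, feat_bias):
        joint = self.net(torch.cat([feat_target, feat_bias], dim=1))
        # Shuffling bias features across the batch samples the product
        # of marginals.
        perm = torch.randperm(feat_bias.size(0))
        marg = self.net(torch.cat([feat_target, feat_bias[perm]], dim=1))
        # Debiasing plays an adversarial game: ascend this bound w.r.t.
        # the estimator, descend it w.r.t. the target encoder.
        return joint.mean() - torch.log(torch.exp(marg).mean())
```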
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
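A hedged sketch of the idea: jigsaw-shuffled real images act as negatives the discriminator must score as fake alongside generator samples (the jigsaw transform and the non-saturating loss are assumed choices; the paper studies several NDA transformations):

```python
import torch
import torch.nn.functional as F

def jigsaw_nda(images, grid=2):
    """NDA stand-in: shuffle image patches so samples fall just outside
    the data support while keeping local statistics intact."""
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)   # B,C,g,g,ph,pw
    patches = patches.reshape(b, c, grid * grid, ph, pw)
    patches = patches[:, :, torch.randperm(grid * grid)]
    rows = [torch.cat(list(patches[:, :, r*grid:(r+1)*grid].unbind(2)), dim=3)
            for r in range(grid)]
    return torch.cat(rows, dim=2)

def d_loss_with_nda(disc, real, fake):
    """Assumed NDA-GAN discriminator objective: negatives of real data are
    treated as fake, tightening the learned support."""
    return (F.softplus(-disc(real)).mean()                # real -> real
            + F.softplus(disc(fake)).mean()               # generated -> fake
            + F.softplus(disc(jigsaw_nda(real))).mean())  # NDA -> fake
```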
arXiv Detail & Related papers (2021-02-09T20:28:35Z)