Towards Measuring Bias in Image Classification
- URL: http://arxiv.org/abs/2107.00360v1
- Date: Thu, 1 Jul 2021 10:50:39 GMT
- Title: Towards Measuring Bias in Image Classification
- Authors: Nina Schaaf, Omar de Mitri, Hang Beom Kim, Alexander Windberger, Marco
F. Huber
- Abstract summary: Convolutional Neural Networks (CNNs) have become the state of the art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncovering data bias by means of attribution maps.
- Score: 61.802949761385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) have become the de facto
state of the art for the main computer vision tasks. However, due to their
complex underlying structure, their decisions are hard to understand, which
limits their use in some industrial contexts. A common and hard-to-detect
challenge in machine learning (ML) tasks is data bias. In this work, we
present a systematic approach to uncovering data bias by means of attribution
maps. For this purpose, an artificial dataset with a known bias is first
created and used to train intentionally biased CNNs. The networks' decisions
are then inspected using attribution maps. Finally, meaningful metrics are
used to measure the attribution maps' representativeness with respect to the
known bias. The proposed study shows that some attribution map techniques
highlight the presence of bias in the data better than others, and that
metrics can support the identification of bias.
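The pipeline above is straightforward to prototype. Below is a minimal sketch, assuming a trained PyTorch CNN and a per-image binary mask marking the artificially injected bias region; the plain gradient saliency map and the top-k overlap metric are illustrative stand-ins for the specific attribution techniques and metrics the paper compares.

```python
import torch

def saliency_map(model, image, target_class):
    # Plain gradient attribution: |d score / d input|, max over channels.
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0).max(dim=0).values  # shape (H, W)

def bias_overlap(attribution, bias_mask, top_frac=0.05):
    # Representativeness metric (an assumption, not the paper's exact
    # definition): fraction of the most-attributed pixels that fall
    # inside the known bias region.
    flat = attribution.flatten()
    k = max(1, int(top_frac * flat.numel()))
    top_idx = flat.topk(k).indices
    return bias_mask.flatten()[top_idx].float().mean().item()
```

A value near 1 means the network's evidence concentrates on the injected bias; a value near the bias region's area fraction suggests no preferential focus on it.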
Related papers
- Data Bias Management [17.067962372238135]
We show how bias in data affects end users, where bias originates, and provide a viewpoint about what we should do about it.
We argue that data bias is not something that should necessarily be removed in all cases, and that research attention should instead shift from bias removal to bias management.
arXiv Detail & Related papers (2023-05-15T10:07:27Z)
- Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves the proximity information of entity-to-relation in knowledge graphs.
We develop two instances of Fair-KGNN incorporating two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
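The summary does not give Fair-KGNN's objective, so the following is only a hedged sketch of the general idea: add a penalty to the usual link-prediction loss of an RGCN or CompGCN encoder that discourages entity embeddings from encoding a sensitive attribute, while the task loss preserves proximity information.

```python
import torch

def decorrelation_penalty(z, s):
    # z: (N, D) entity embeddings from an RGCN/CompGCN encoder.
    # s: (N,) binary sensitive attribute, e.g. gender (hypothetical).
    # Penalizes the covariance between each embedding dimension and s;
    # an illustrative stand-in for Fair-KGNN's actual objective.
    z_c = z - z.mean(dim=0, keepdim=True)
    s_c = (s.float() - s.float().mean()).unsqueeze(1)
    return (z_c * s_c).mean(dim=0).pow(2).sum()
```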
arXiv Detail & Related papers (2022-11-26T05:55:34Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
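The three steps are not spelled out in this summary. A plausible skeleton for the unsupervised part, assuming bias pseudo-labels are obtained by clustering the features of a vanilla, bias-prone model and then handed to the supervised debiasing technique in place of true bias annotations:

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_bias_labels(features: np.ndarray, n_bias_groups: int = 2):
    # features: (N, D) activations of a vanilla model, which tends to
    # latch onto the bias when it is easier to learn than the task.
    # The returned cluster ids act as surrogate bias annotations for
    # any supervised debiasing method (names here are assumptions).
    return KMeans(n_clusters=n_bias_groups, n_init=10).fit_predict(features)
```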
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- De-biasing facial detection system using VAE [0.0]
Bias in AI/ML-based systems is a ubiquitous problem that can negatively impact society.
The proposed approach uses generative models, which are well suited to learning underlying features.
With the help of an algorithm, the bias present in the dataset can be removed.
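One common way a generative model supports this (for example, in debiasing VAEs) is to resample the training data according to the learned latent density, so rare feature combinations are seen more often. A sketch, assuming per-sample latent means from an already-trained VAE; the per-dimension histogram weighting below is an assumption, not necessarily this paper's exact scheme.

```python
import numpy as np

def debiasing_weights(latents: np.ndarray, bins: int = 10, alpha: float = 1e-3):
    # latents: (N, D) VAE latent means. Samples in sparsely populated
    # latent regions get larger sampling weights.
    weights = np.ones(len(latents))
    for d in range(latents.shape[1]):
        hist, edges = np.histogram(latents[:, d], bins=bins, density=True)
        idx = np.digitize(latents[:, d], edges[1:-1])  # bin index 0..bins-1
        weights *= 1.0 / (hist[idx] + alpha)           # alpha avoids 1/0
    return weights / weights.sum()  # e.g. for a WeightedRandomSampler
```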
arXiv Detail & Related papers (2022-04-16T11:24:37Z)
- Towards Learning an Unbiased Classifier from Biased Data via Conditional Adversarial Debiasing [17.113618920885187]
We present a novel adversarial debiasing method, which addresses a feature that is spuriously connected to the labels of training images.
We argue, via a mathematical proof, that our approach is superior to existing techniques for the above-mentioned bias.
Our experiments show that our approach performs better than state-of-the-art techniques on a well-known benchmark dataset with real-world images of cats and dogs.
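A hedged sketch of what the conditioning can look like in practice: a gradient-reversal adversary predicts the spurious feature, with one head per class, so that independence between features and bias is enforced given the label. The per-class-head construction is an assumption; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, negated gradient in the backward
    # pass, so the encoder learns to defeat the bias predictor.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def conditional_adversarial_loss(features, bias_labels, class_labels, heads):
    # heads: nn.ModuleList with one bias-prediction head per class
    # (hypothetical construction). Conditioning on the class label is
    # the key idea named in the abstract.
    loss = features.new_zeros(())
    for c, head in enumerate(heads):
        mask = class_labels == c
        if mask.any():
            z = GradReverse.apply(features[mask])
            loss = loss + F.cross_entropy(head(z), bias_labels[mask])
    return loss
```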
arXiv Detail & Related papers (2021-03-10T16:50:42Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
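A sketch of the failure-based reweighting, assuming the pair of networks from the summary: samples the bias-amplified network gets wrong are up-weighted when training the debiased network. The ratio below follows the paper's relative-difficulty idea, though details may differ.

```python
import torch.nn.functional as F

def relative_difficulty_weights(biased_logits, debiased_logits, targets):
    # Per-sample weight in [0, 1]: large where the bias-prone network
    # fails (bias-conflicting samples), small where it succeeds.
    ce_b = F.cross_entropy(biased_logits, targets, reduction='none')
    ce_d = F.cross_entropy(debiased_logits, targets, reduction='none')
    return (ce_b / (ce_b + ce_d + 1e-8)).detach()
```

The weights then multiply the debiased network's per-sample loss, while the biased partner is trained with a loss that amplifies its reliance on easy, spurious cues.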
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
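As a hedged illustration of what such a measurement can look like (not the paper's two proposed measures): attack every input with one-step FGSM and report the gap between the best and worst per-group accuracy.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM perturbation in the direction of the loss gradient.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def robustness_bias(model, x, y, groups, eps=0.03):
    # Gap between best and worst per-group accuracy under attack;
    # an illustrative statistic, assuming known group labels.
    x_adv = fgsm(model, x, y, eps)
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    accs = [(preds[groups == g] == y[groups == g]).float().mean().item()
            for g in groups.unique()]
    return max(accs) - min(accs)
```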
arXiv Detail & Related papers (2020-06-17T22:22:24Z)
- REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets [64.76453161039973]
REVISE (REvealing VIsual biaSEs) is a tool that assists in the investigation of a visual dataset.
It surfaces potential biases along three dimensions: (1) object-based, (2) person-based, and (3) geography-based.
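A toy version of the geography-based check, counting how unevenly each object class is spread across regions; this is not REVISE's actual API, only the flavor of analysis the tool automates.

```python
from collections import Counter

def object_region_skew(annotations):
    # annotations: list of (object_class, region) pairs (hypothetical
    # input format). Returns, per object, the spread between its
    # largest and smallest share of a region's annotations.
    annotations = list(annotations)
    region_totals = Counter(region for _, region in annotations)
    per_object = {}
    for obj, region in annotations:
        per_object.setdefault(obj, Counter())[region] += 1
    skew = {obj: max(c[r] / region_totals[r] for r in region_totals) -
                 min(c[r] / region_totals[r] for r in region_totals)
            for obj, c in per_object.items()}
    return dict(sorted(skew.items(), key=lambda kv: -kv[1]))
```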
arXiv Detail & Related papers (2020-04-16T23:54:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.