Race Bias Analysis of Bona Fide Errors in face anti-spoofing
- URL: http://arxiv.org/abs/2210.05366v1
- Date: Tue, 11 Oct 2022 11:49:24 GMT
- Title: Race Bias Analysis of Bona Fide Errors in face anti-spoofing
- Authors: Latifah Abduh, Ioannis Ivrissimtzis
- Abstract summary: We present a systematic study of race bias in face anti-spoofing with three key characteristics.
The focus is on analysing potential bias in the bona fide errors, where significant ethical and legal issues lie.
We demonstrate the proposed bias analysis process on a VQ-VAE based face anti-spoofing algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of bias in Machine Learning has received a lot of attention in
recent years; however, only a few papers deal explicitly with the problem of race
bias in face anti-spoofing. In this paper, we present a systematic study of
race bias in face anti-spoofing with three key characteristics: the focus is on
analysing potential bias in the bona fide errors, where significant ethical and
legal issues lie; the analysis is not restricted to the final binary outcomes
of the classifier, but also covers the classifier's scalar responses and its
latent space; the threshold determining the operating point of the classifier
is considered a variable. We demonstrate the proposed bias analysis process on
a VQ-VAE based face anti-spoofing algorithm, trained on the Replay Attack and
the Spoof in the Wild (SiW) databases, and analysed for bias on the SiW and
Racial Faces in the Wild (RFW) databases. The results demonstrate that race
bias is not necessarily the result of different mean response values among the
various populations. Instead, it can be better understood as the combined
effect of several possible characteristics of the response distributions:
different means; different variances; bimodal behaviour; existence of outliers.
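As a rough illustration of the threshold-as-a-variable analysis, the sketch below traces the bona fide error rate (BPCER) of each demographic group as the decision threshold sweeps its range; the group names and response distributions are synthetic stand-ins, not the paper's VQ-VAE responses.

```python
# Illustrative only: synthetic bona fide response distributions per
# group; a real analysis would use the VQ-VAE classifier's scalar
# responses on SiW / RFW images.
import numpy as np

rng = np.random.default_rng(0)
responses = {
    "group_A": rng.normal(0.30, 0.08, 1000),
    "group_B": np.concatenate([            # bimodal example
        rng.normal(0.28, 0.06, 800),
        rng.normal(0.55, 0.05, 200),
    ]),
}

thresholds = np.linspace(0.0, 1.0, 101)
for group, r in responses.items():
    # A bona fide sample is an error when its response exceeds the
    # threshold, i.e. it is wrongly flagged as an attack.
    bpcer = np.array([(r > t).mean() for t in thresholds])
    print(group, "BPCER at t=0.4:", round(float(bpcer[40]), 3))
```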
Related papers
- Mitigating Bias for Question Answering Models by Tracking Bias Influence [84.66462028537475]
We propose BMBI, an approach to mitigate the bias of multiple-choice QA models.
Based on the intuition that a model tends to become more biased if it learns from a biased example, we measure the bias level of a query instance.
We show that our method could be applied to multiple QA formulations across multiple bias categories.
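A minimal sketch of the bias-influence intuition (not the BMBI implementation): take one learning step on a query instance and measure how much a bias metric on a probe set changes. The logistic model, bias measure, and data below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bias_gap(w, X, y, group):
    """Illustrative bias measure: accuracy gap between two groups."""
    pred = sigmoid(X @ w) > 0.5
    acc = lambda g: (pred[group == g] == y[group == g]).mean()
    return abs(acc(0) - acc(1))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)
w = 0.1 * rng.normal(size=5)

x_q, y_q = X[0], y[0]                  # the query instance
before = bias_gap(w, X, y, group)
grad = (sigmoid(x_q @ w) - y_q) * x_q  # logistic-loss gradient at x_q
w_after = w - 0.5 * grad               # one learning step on the query
after = bias_gap(w_after, X, y, group)
print("bias influence of the query:", after - before)
```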
arXiv Detail & Related papers (2023-10-13T00:49:09Z)
- Toward Understanding Bias Correlations for Mitigation in NLP [34.956581421295]
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated and present scenarios in which independent debiasing approaches may be insufficient.
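A short, hedged illustration of the correlation check this line describes; the per-example bias scores below are synthetic stand-ins for scores computed with respect to two demographic attributes, constructed to be correlated for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
gender_bias = rng.normal(size=500)
race_bias = 0.6 * gender_bias + rng.normal(scale=0.8, size=500)

r = np.corrcoef(gender_bias, race_bias)[0, 1]
print(f"correlation between bias scores: {r:.2f}")
# A strong correlation suggests debiasing one attribute in isolation
# may be insufficient.
```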
arXiv Detail & Related papers (2022-05-24T22:48:47Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on the gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Information-Theoretic Bias Assessment Of Learned Representations Of Pretrained Face Recognition [18.07966649678408]
We propose an information-theoretic, independent bias assessment metric to identify the degree of bias against protected demographic attributes.
Our metric differs from other methods, which rely on classification accuracy or examine the differences between the ground truth and the labels of protected attributes predicted by a shallow network.
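One plausible way to illustrate the underlying idea (not the paper's exact metric) is to estimate how much information learned representations carry about a protected attribute; the sketch below uses scikit-learn's mutual information estimator on synthetic embeddings.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
embeddings = rng.normal(size=(1000, 64))  # stand-in for face features
attribute = rng.integers(0, 2, 1000)      # protected attribute labels
# Leak some attribute information into one embedding dimension.
embeddings[:, 0] += 1.5 * attribute

mi_per_dim = mutual_info_classif(embeddings, attribute, random_state=0)
print("total mutual information estimate (nats):", mi_per_dim.sum())
```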
arXiv Detail & Related papers (2021-11-08T17:41:17Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
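A minimal sketch of generic instance reweighting in this spirit (not necessarily the paper's exact scheme): weight each example inversely to the frequency of its (demographic, label) combination, so the model cannot profit from demographic-label correlations.

```python
from collections import Counter

demographics = ["f", "f", "m", "m", "m", "f"]  # toy author attributes
labels       = [ 1,   1,   0,   1,   0,   0 ]  # toy task labels

counts = Counter(zip(demographics, labels))
n = len(labels)
# Rarer (demographic, label) combinations receive larger weights.
weights = [n / (len(counts) * counts[(d, y)])
           for d, y in zip(demographics, labels)]
print(weights)
```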
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation [1.6386696247541932]
"Fairness" in AI refers to assessing algorithms for potential bias based on demographic characteristics such as race and gender.
Deep learning (DL) in cardiac MR segmentation has led to impressive results in recent years, but no work has yet investigated the fairness of such models.
We find statistically significant differences in Dice performance between different racial groups.
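The kind of per-group comparison reported here can be sketched as follows; the Dice scores are synthetic, and the Mann-Whitney test is one plausible choice of significance test, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# Synthetic per-subject Dice scores for two racial groups.
dice_a = np.clip(rng.normal(0.93, 0.02, 80), 0.0, 1.0)
dice_b = np.clip(rng.normal(0.90, 0.03, 80), 0.0, 1.0)

stat, p = mannwhitneyu(dice_a, dice_b)
print(f"mean Dice {dice_a.mean():.3f} vs {dice_b.mean():.3f}, p = {p:.2g}")
```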
arXiv Detail & Related papers (2021-06-23T13:27:35Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
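The single-threshold problem can be illustrated by deriving a per-subgroup threshold at a fixed false match rate; the impostor score distributions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic impostor similarity scores; subgroup_B is shifted upward.
impostor_scores = {
    "subgroup_A": rng.normal(0.30, 0.10, 5000),
    "subgroup_B": rng.normal(0.38, 0.10, 5000),
}

target_fmr = 0.001
for name, scores in impostor_scores.items():
    # Threshold = quantile leaving target_fmr impostor scores above it.
    t = np.quantile(scores, 1.0 - target_fmr)
    print(f"{name}: threshold at FMR {target_fmr} -> {t:.3f}")
# A single global threshold would over-reject one subgroup and
# under-reject the other.
```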
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
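A hedged sketch of clustering-based local bias detection in the spirit of LOGAN (not the authors' code): cluster inputs, then look for clusters with a large per-group performance gap. Embeddings, groups, and correctness labels below are synthetic, with a bias planted in one region.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.normal(size=(600, 16))        # stand-in input embeddings
group = rng.integers(0, 2, 600)       # demographic group label
correct = rng.random(600) < 0.8       # per-example model correctness
correct[(X[:, 0] > 1.0) & (group == 1)] = False  # planted local bias

clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
for c in range(8):
    m = clusters == c
    gap = abs(correct[m & (group == 0)].mean()
              - correct[m & (group == 1)].mean())
    print(f"cluster {c}: accuracy gap {gap:.2f}")
```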
arXiv Detail & Related papers (2020-10-06T16:42:51Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
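A simplified sketch of the failure-based weighting at the core of this scheme: examples the bias-prone network finds hard receive larger weights for the debiased network. Only the weight computation is shown, with made-up per-example losses.

```python
import numpy as np

# Made-up per-example losses from the two networks trained in parallel.
loss_biased   = np.array([0.05, 0.10, 2.30, 1.80])
loss_debiased = np.array([0.40, 0.35, 0.50, 0.60])

# Relative difficulty: near 1 when the biased network fails on the
# example, near 0 when the spurious cue solves it easily.
weights = loss_biased / (loss_biased + loss_debiased + 1e-8)
print(weights)  # bias-conflicting examples get the largest weights
```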
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Interventions for Ranking in the Presence of Implicit Bias [34.23230188778088]
Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member from a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome for certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
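A toy sketch of a Rooney-Rule-style constraint for subset selection: when shortlisting n candidates by possibly biased utility scores, require at least k from the underrepresented group. All names and scores here are invented.

```python
def shortlist(candidates, n, k, protected):
    """Top-n by score, with at least k candidates from `protected`."""
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    reserved = [c for c in ranked if c[1] == protected][:k]
    rest = [c for c in ranked if c not in reserved]
    return sorted(reserved + rest[: n - len(reserved)],
                  key=lambda c: c[2], reverse=True)

pool = [("a", "G1", 0.9), ("b", "G1", 0.8), ("c", "G2", 0.7),
        ("d", "G1", 0.6), ("e", "G2", 0.5)]
print(shortlist(pool, n=3, k=2, protected="G2"))  # guarantees two G2 picks
```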
arXiv Detail & Related papers (2020-01-23T19:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.