There is an elephant in the room: Towards a critique on the use of
fairness in biometrics
- URL: http://arxiv.org/abs/2112.11193v1
- Date: Thu, 16 Dec 2021 10:32:41 GMT
- Title: There is an elephant in the room: Towards a critique on the use of
fairness in biometrics
- Authors: Ana Valdivia, Júlia Corbera-Serrajòrdia, Aneta Swianiewicz
- Abstract summary: We offer a critical reading of recent debates about biometric fairness.
We show that biometric fairness criteria are mathematically mutually exclusive.
We discuss the politics of fairness in biometrics by situating the debate at the border.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In 2019, the UK's Immigration and Asylum Chamber of the Upper Tribunal
dismissed an asylum appeal, basing its decision on the output of a biometric
system alongside other discrepancies. The fingerprints of the asylum seeker
were found in a biometric database, which contradicted the appellant's account.
The Tribunal found this evidence unequivocal and denied the asylum claim.
Nowadays, the proliferation of biometric systems is shaping public debates
around their political, social and ethical implications. Yet whilst concerns
about the racialised use of this technology for migration control have been
on the rise, investment in the biometrics industry and its innovation continues
to grow considerably. Moreover, fairness has recently been adopted by the
biometrics community to mitigate bias and discrimination in biometric systems.
However, algorithmic fairness cannot distribute justice in scenarios that are
broken or whose intended purpose is to discriminate, such as biometrics deployed
at the border.
In this paper, we offer a critical reading of recent debates about biometric
fairness and show their limitations, drawing on research in fairness in machine
learning and critical border studies. Building on previous fairness
demonstrations, we prove that biometric fairness criteria are mathematically
mutually exclusive. The paper then illustrates empirically that a fair
biometric system is not possible, by reproducing experiments from previous
works. Finally, we discuss the politics of fairness in biometrics by situating
the debate at the border. We claim that bias and error rates have a different
impact on citizens and asylum seekers. Fairness has overshadowed the elephant
in the room of biometrics, focusing on the demographic biases and ethical
discourses of algorithms rather than examining how these systems reproduce
historical and political injustices.
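The mutual-exclusivity claim echoes the standard impossibility result from fairness in machine learning: when two groups have different base rates, a classifier cannot simultaneously equalise predictive value (calibration) and error rates across them. A minimal numeric sketch of that tension (not taken from the paper; the group base rates and target metrics below are hypothetical):

```python
# Illustrative sketch of the fairness impossibility result: with unequal
# base rates, equal PPV and equal FNR across groups force unequal FPRs.

def implied_fpr(base_rate, ppv, fnr):
    """FPR implied by the confusion-matrix identity
    FPR = base_rate/(1-base_rate) * (1-PPV)/PPV * (1-FNR)."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical groups with different prevalence of true matches.
ppv, fnr = 0.8, 0.1              # equalised across groups by construction
fpr_a = implied_fpr(0.3, ppv, fnr)   # group A: base rate 0.3
fpr_b = implied_fpr(0.1, ppv, fnr)   # group B: base rate 0.1

print(f"Group A FPR: {fpr_a:.4f}")  # -> 0.0964
print(f"Group B FPR: {fpr_b:.4f}")  # -> 0.0250
# The implied FPRs differ, so calibration and equal error rates cannot
# hold at the same time once base rates diverge.
```

The identity used here follows directly from rewriting PPV = TP/(TP+FP) in terms of rates; any choice of unequal base rates yields the same conclusion.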
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset called the Fair Forgery Detection (FairFD) dataset, where we prove the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z) - Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z) - The theoretical limits of biometry [0.0]
We propose a theoretical analysis of the distinguishability problem, which governs the error rates of biometric systems.
We demonstrate simple relationships between the population size and the number of independent bits necessary to prevent collision in the presence of noise.
The results are very encouraging, as the biometry of the whole Earth population can fit in a regular disk, leaving some space for noise and redundancy.
arXiv Detail & Related papers (2023-11-06T08:28:12Z) - Facial Soft Biometrics for Recognition in the Wild: Recent Works,
Annotation, and COTS Evaluation [63.05890836038913]
We study the role of soft biometrics to enhance person recognition systems in unconstrained scenarios.
We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning.
arXiv Detail & Related papers (2022-10-24T11:29:57Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Demographic Fairness in Biometric Systems: What do the Experts say? [16.72651695033691]
Algorithmic decision systems have been labelled as "biased", "racist", "sexist", or "unfair".
There is an ongoing debate about whether such assessments are justified and whether citizens and policymakers should be concerned.
Recently, the European Association for Biometrics organised an event series with "demographic fairness in biometric systems" as an overarching theme.
arXiv Detail & Related papers (2021-05-31T09:58:51Z) - Biometrics: Trust, but Verify [49.9641823975828]
Biometric recognition has exploded into a plethora of different applications around the globe.
There are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems.
arXiv Detail & Related papers (2021-05-14T03:07:25Z) - Fairness in Biometrics: a figure of merit to assess biometric
verification systems [1.218340575383456]
We introduce the first figure of merit that is able to evaluate and compare fairness aspects between multiple biometric verification systems.
A use case with two synthetic biometric systems is introduced and demonstrates the potential of this figure of merit.
Second, a use case using face biometrics is presented where several systems are evaluated compared with this new figure of merit.
arXiv Detail & Related papers (2020-11-04T16:46:37Z) - Demographic Bias: A Challenge for Fingervein Recognition Systems? [0.0]
Concerns regarding potential biases in the underlying algorithms of many automated systems (including biometrics) have been raised.
A biased algorithm produces statistically different outcomes for different groups of individuals based on certain (often protected by anti-discrimination legislation) attributes such as sex and age.
In this paper, several popular types of recognition algorithms are benchmarked to ascertain the matter for fingervein recognition.
arXiv Detail & Related papers (2020-04-03T07:53:11Z) - Demographic Bias in Biometrics: A Survey on an Emerging Challenge [0.0]
Biometric systems rely on the uniqueness of certain biological or forensic characteristics of human beings.
There has been a wave of public and academic concerns regarding the existence of systemic bias in automated decision systems.
arXiv Detail & Related papers (2020-03-05T09:07:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.