Demographic Bias: A Challenge for Fingervein Recognition Systems?
- URL: http://arxiv.org/abs/2004.01418v1
- Date: Fri, 3 Apr 2020 07:53:11 GMT
- Title: Demographic Bias: A Challenge for Fingervein Recognition Systems?
- Authors: P. Drozdowski, B. Prommegger, G. Wimmer, R. Schraml, C. Rathgeb, A.
Uhl, C. Busch
- Abstract summary: Concerns regarding potential biases in the underlying algorithms of many automated systems (including biometrics) have been raised.
A biased algorithm produces statistically different outcomes for different groups of individuals based on certain (often protected by anti-discrimination legislation) attributes such as sex and age.
In this paper, several popular types of recognition algorithms are benchmarked to ascertain whether such bias exists in fingervein recognition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, concerns regarding potential biases in the underlying algorithms of
many automated systems (including biometrics) have been raised. In this
context, a biased algorithm produces statistically different outcomes for
different groups of individuals based on certain (often protected by
anti-discrimination legislation) attributes such as sex and age. While several
preliminary studies investigating this matter for facial recognition algorithms
do exist, said topic has not yet been addressed for vascular biometric
characteristics. Accordingly, in this paper, several popular types of
recognition algorithms are benchmarked to ascertain whether such bias exists
in fingervein recognition. The experimental evaluation suggests a lack of bias
for the tested algorithms, although future work with larger datasets is needed
to validate and confirm these preliminary results.
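To make the benchmarking question concrete, the sketch below shows one way such a bias check could be set up: comparison scores are split by a demographic attribute (here, sex) and the false match rate (FMR) and false non-match rate (FNMR) are computed per group at a shared decision threshold, assuming similarity scores where higher means more similar. This is a minimal illustration, not the paper's actual protocol; the function names, the threshold, and the synthetic scores are assumptions.

```python
import numpy as np

def fnmr(genuine_scores, threshold):
    """False non-match rate: fraction of genuine comparisons rejected."""
    return float(np.mean(np.asarray(genuine_scores) < threshold))

def fmr(impostor_scores, threshold):
    """False match rate: fraction of impostor comparisons accepted."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

def per_group_error_rates(scores_by_group, threshold):
    """Compute FMR/FNMR separately for each demographic group.

    scores_by_group maps a group label (e.g. 'female') to a dict with
    'genuine' and 'impostor' comparison-score arrays.
    """
    return {
        group: {"FNMR": fnmr(s["genuine"], threshold),
                "FMR": fmr(s["impostor"], threshold)}
        for group, s in scores_by_group.items()
    }

# Toy run with synthetic scores (illustration only, not real data):
rng = np.random.default_rng(0)
scores = {
    "female": {"genuine": rng.normal(0.80, 0.10, 1000),
               "impostor": rng.normal(0.30, 0.10, 10000)},
    "male":   {"genuine": rng.normal(0.80, 0.10, 1000),
               "impostor": rng.normal(0.30, 0.10, 10000)},
}
print(per_group_error_rates(scores, threshold=0.55))
```

Large, consistent gaps between the per-group rates would indicate differential outcomes; a complementary, threshold-free check is a two-sample test (e.g. Kolmogorov-Smirnov) on the groups' genuine score distributions.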
Related papers
- Facial Soft Biometrics for Recognition in the Wild: Recent Works,
Annotation, and COTS Evaluation [63.05890836038913]
We study the role of soft biometrics in enhancing person recognition systems in unconstrained scenarios.
We consider two settings: 1) manual estimation of soft biometrics, and 2) automatic estimation from two commercial off-the-shelf (COTS) systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning; a minimal score-fusion sketch appears after this list.
arXiv Detail & Related papers (2022-10-24T11:29:57Z)
- Evaluating Proposed Fairness Models for Face Recognition Algorithms [0.0]
This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure.
We also release an accompanying dataset, which we believe is currently the largest open-source dataset of its kind.
arXiv Detail & Related papers (2022-03-09T21:16:43Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on the gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal
Biometric Fusion Algorithms [58.156733807470395]
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign.
The campaign targeted the application of physical access control in a medium-sized establishment with around 500 persons.
To the best of our knowledge, this is the first attempt to benchmark quality-based multimodal fusion algorithms.
arXiv Detail & Related papers (2021-11-17T13:39:48Z)
- Biometrics: Trust, but Verify [49.9641823975828]
Biometric recognition has exploded into a plethora of different applications around the globe.
There are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems.
arXiv Detail & Related papers (2021-05-14T03:07:25Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating "synthetic transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- SensitiveLoss: Improving Accuracy and Fairness of Face Representations
with Discrimination-Aware Deep Learning [17.088716485755917]
We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most widely used face databases have led to popular pre-trained deep face models that exhibit strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
arXiv Detail & Related papers (2020-04-22T10:32:16Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles deliberately scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Demographic Bias in Presentation Attack Detection of Iris Recognition
Systems [15.15287401843062]
We investigate and analyze the demographic bias in presentation attack detection (PAD) algorithms.
We adapt the notions of differential performance and differential outcome to the PAD problem; a minimal sketch of both metrics appears after this list.
Experiments show that female users are significantly less protected by the PAD than male users.
arXiv Detail & Related papers (2020-03-06T12:16:19Z)
- Demographic Bias in Biometrics: A Survey on an Emerging Challenge [0.0]
Biometric systems rely on the uniqueness of certain biological or forensic characteristics of human beings.
There has been a wave of public and academic concerns regarding the existence of systemic bias in automated decision systems.
arXiv Detail & Related papers (2020-03-05T09:07:59Z)
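Two of the entries above (the soft-biometrics study and the BioSecure campaign) revolve around score-level fusion. As a minimal illustration of the simplest variant, the sketch below combines a primary matcher score with soft-biometric scores via a weighted sum; the function name, weights, and scores are hypothetical and do not reproduce the fusion rules evaluated in those papers.

```python
import numpy as np

def weighted_sum_fusion(primary_score, soft_scores, weights):
    """Score-level fusion: weighted sum of a primary matcher score and
    soft-biometric similarity scores (all assumed normalised to [0, 1])."""
    scores = np.concatenate(([primary_score], soft_scores))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise weights so they sum to 1
    return float(np.dot(w, scores))

# Toy example: a face-matcher score fused with two soft-biometric scores
# (e.g. sex and age-band agreement); weights are illustrative only.
print(weighted_sum_fusion(0.82, [0.9, 0.6], weights=[0.7, 0.2, 0.1]))  # 0.814
```

Quality-dependent variants, as benchmarked in the BioSecure campaign, would additionally modulate the weights using per-sample quality estimates.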
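Likewise, the differential performance and differential outcome notions referenced in the iris-PAD entry can be made concrete. In a rough sketch under assumed conventions (higher PAD score = more likely bona fide), differential performance compares score distributions between groups independently of any threshold, while differential outcome compares decision error rates at a fixed threshold, here the bona fide presentation classification error rate (BPCER). Function names and the synthetic scores are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def differential_performance(scores_a, scores_b):
    """Compare bona fide PAD-score *distributions* of two demographic
    groups, independent of any threshold (two-sample KS test)."""
    return stats.ks_2samp(scores_a, scores_b)

def differential_outcome(bona_fide_a, bona_fide_b, threshold):
    """Compare *decisions* at a fixed threshold: per-group BPCER, i.e.
    the fraction of bona fide samples wrongly rejected as attacks."""
    bpcer_a = float(np.mean(np.asarray(bona_fide_a) < threshold))
    bpcer_b = float(np.mean(np.asarray(bona_fide_b) < threshold))
    return bpcer_a, bpcer_b

# Toy run with synthetic bona fide scores (illustration only):
rng = np.random.default_rng(1)
female = rng.normal(0.70, 0.12, 2000)
male = rng.normal(0.78, 0.10, 2000)
print(differential_performance(female, male))
print(differential_outcome(female, male, threshold=0.5))
```

In this toy data the two groups' score distributions differ, so both checks flag a gap; on real data, such gaps would need statistical validation on larger datasets, as the main abstract notes.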
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.