Algorithmic Fairness in Face Morphing Attack Detection
- URL: http://arxiv.org/abs/2111.12115v1
- Date: Tue, 23 Nov 2021 19:16:04 GMT
- Title: Algorithmic Fairness in Face Morphing Attack Detection
- Authors: Raghavendra Ramachandra, Kiran Raja, Christoph Busch
- Abstract summary: Face Morphing Attack Detection (MAD) techniques have been developed in the recent past to deter face morphing attacks and mitigate the risks they pose.
MAD algorithms should treat images of subjects from different ethnic origins equally and provide non-discriminatory results.
While promising MAD algorithms have been tested for robustness, no study has comprehensively benchmarked their behaviour across ethnicities.
- Score: 12.031583036177386
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face morphing attacks can compromise a Face Recognition System (FRS) by
exploiting its vulnerability. Face Morphing Attack Detection (MAD) techniques
have been developed in the recent past to deter such attacks and mitigate the
risks they pose. MAD algorithms, like any other algorithms, should treat images
of subjects from different ethnic origins equally and provide non-discriminatory
results. While promising MAD algorithms have been tested for robustness, there
is no study that comprehensively benchmarks their behaviour across various
ethnicities. In this paper, we present a comprehensive analysis of the
algorithmic fairness of existing Single image-based Morph Attack Detection
(S-MAD) algorithms. To better understand the influence of ethnic bias on MAD
algorithms, we study their performance on a newly created dataset consisting of
four different ethnic groups. Through extensive experiments using six different
S-MAD techniques, we first present a benchmark of detection performance and
then quantify the algorithmic fairness of each technique using the Fairness
Discrepancy Rate (FDR). The results indicate a lack of fairness in all six
S-MAD methods when trained and tested on different ethnic groups, suggesting
the need for reliable MAD approaches that mitigate algorithmic bias.
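The abstract does not spell out the exact FDR formulation used in the paper; the following is a minimal sketch of the Fairness Discrepancy Rate as defined by de Freitas Pereira and Marcel, with the MAD error rates APCER and BPCER standing in for the verification error rates, and with the group names, error values and weight alpha chosen purely for illustration.

```python
def fdr(apcer_by_group, bpcer_by_group, alpha=0.5):
    """Fairness Discrepancy Rate at a fixed decision threshold.

    apcer_by_group / bpcer_by_group: dicts mapping a demographic group
    (here: ethnicity) to its attack / bona-fide classification error rate.
    alpha weighs the two error types (alpha + beta = 1).
    FDR = 1 means perfectly fair; lower values mean larger discrepancies.
    """
    a = max(apcer_by_group.values()) - min(apcer_by_group.values())
    b = max(bpcer_by_group.values()) - min(bpcer_by_group.values())
    return 1.0 - (alpha * a + (1.0 - alpha) * b)

# Toy usage with four illustrative ethnic groups (values are made up):
apcer = {"g1": 0.05, "g2": 0.12, "g3": 0.08, "g4": 0.20}
bpcer = {"g1": 0.02, "g2": 0.03, "g3": 0.10, "g4": 0.04}
print(f"FDR = {fdr(apcer, bpcer):.3f}")  # -> FDR = 0.885
```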
Related papers
- Re-evaluation of Face Anti-spoofing Algorithm in Post COVID-19 Era Using Mask Based Occlusion Attack [4.550965216676562]
Face anti-spoofing algorithms play a pivotal role in the robust deployment of face recognition systems against presentation attacks.
We have used five variants of masks to cover the lower part of the face with varying coverage areas.
We have also used different variants of glasses that cover the upper part of the face.
arXiv Detail & Related papers (2024-08-23T17:48:22Z)
- Greedy-DiM: Greedy Algorithms for Unreasonably Effective Face Morphs [2.0795007613453445]
Diffusion Morphs (DiM) are a recently proposed morphing attack that achieves state-of-the-art performance among representation-based morphing attacks.
We propose a greedy strategy on the iterative sampling process of DiM models that searches for an optimal step guided by an identity-based objective function.
We find that our proposed algorithm is unreasonably effective, fooling all of the tested FR systems with an MMPMR of 100% and outperforming all other compared morphing algorithms (see the MMPMR sketch after this list).
arXiv Detail & Related papers (2024-04-09T05:21:32Z)
- IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing [88.35145788575348]
Image anomaly detection (IAD) is an emerging and vital computer vision task in industrial manufacturing.
The lack of a uniform IM benchmark is hindering the development and usage of IAD methods in real-world applications.
We construct a comprehensive image anomaly detection benchmark (IM-IAD), which includes 19 algorithms on seven major datasets.
arXiv Detail & Related papers (2023-01-31T01:24:45Z)
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks [8.368543987898732]
This paper considers four pre-existing state-of-the-art DFO-based algorithms and introduces a new algorithm built on BOBYQA.
We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify.
Experiments disclose how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack.
arXiv Detail & Related papers (2020-12-03T13:32:20Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category (see the multi-branch sketch after this list).
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-06-11T14:11:09Z)
- Morphing Attack Detection -- Database, Evaluation Platform and Benchmarking [16.77282920396874]
Morphing attacks have posed a severe threat to Face Recognition Systems (FRS).
Despite the number of advancements reported in recent works, we note serious open issues such as independent benchmarking, generalizability challenges and considerations of age, gender and ethnicity that are inadequately addressed.
In this work, we present a new sequestered dataset for facilitating the advancements of MAD.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
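The MMPMR (Mated Morph Presentation Match Rate) cited in the Greedy-DiM entry above is a standard morphing-vulnerability metric: a morph counts as a successful attack only if it matches every subject that contributed to it. A minimal sketch, assuming similarity scores where higher means a better match; the scores and threshold below are illustrative, not taken from the paper.

```python
import numpy as np

def mmpmr(scores_per_morph, threshold):
    """Mated Morph Presentation Match Rate.

    scores_per_morph: list of 1-D arrays; entry m holds the FR comparison
    scores between morph m and each of its contributing subjects.
    A morph succeeds only if its weakest score clears the threshold.
    """
    hits = [np.min(s) > threshold for s in scores_per_morph]
    return float(np.mean(hits))

# Toy usage: two morphs, each created from two subjects.
scores = [np.array([0.71, 0.64]), np.array([0.58, 0.92])]
print(mmpmr(scores, threshold=0.60))  # -> 0.5: only the first morph
# matches both of its contributing subjects above the threshold.
```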
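The MixNet entry above describes learning a separate feature mapping per attack category. The published architecture is not given in the abstract, so the following is only an illustrative multi-branch sketch in that spirit; the branch depth, channel counts and fusion head are assumptions, not MixNet itself.

```python
import torch
import torch.nn as nn

class MultiBranchPAD(nn.Module):
    """Toy multi-branch presentation attack detector: one small CNN
    branch per attack category, fused into a bona-fide/attack head."""

    def __init__(self, num_attack_categories: int = 3):
        super().__init__()
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.branches = nn.ModuleList(
            [branch() for _ in range(num_attack_categories)])
        self.head = nn.Linear(16 * num_attack_categories, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch specializes on one attack category; concatenate
        # the per-branch features and classify bona fide vs. attack.
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.head(feats)

# Toy usage on a random 112x112 face crop:
logits = MultiBranchPAD()(torch.randn(1, 3, 112, 112))
print(logits.shape)  # -> torch.Size([1, 2])
```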
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.