On the Robustness of Face Recognition Algorithms Against Attacks and
Bias
- URL: http://arxiv.org/abs/2002.02942v1
- Date: Fri, 7 Feb 2020 18:21:59 GMT
- Title: On the Robustness of Face Recognition Algorithms Against Attacks and
Bias
- Authors: Richa Singh, Akshay Agarwal, Maneet Singh, Shruti Nagpal, Mayank Vatsa
- Abstract summary: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
- Score: 78.68458616687634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition algorithms have demonstrated very high recognition
performance, suggesting suitability for real-world applications. Despite the
enhanced accuracies, robustness of these algorithms against attacks and bias
has been challenged. This paper summarizes different ways in which the
robustness of a face recognition algorithm is challenged, which can severely
affect its intended working. Different types of attacks such as physical
presentation attacks, disguise/makeup, digital adversarial attacks, and
morphing/tampering using GANs have been discussed. We also present a discussion
on the effect of bias on face recognition models and showcase that factors such
as age and gender variations affect the performance of modern algorithms. The
paper also presents the potential reasons for these challenges and some of the
future research directions for increasing the robustness of face recognition
models.
Related papers
- Quadruplet Loss For Improving the Robustness to Face Morphing Attacks [0.0]
Face recognition systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks.
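The paper's exact loss formulation is not given in the abstract, but the widely used quadruplet loss extends the triplet loss with a second hinge term over a pair of negatives. A minimal NumPy sketch of that standard formulation follows; the margin values, squared-Euclidean distance, and two-margin structure are assumptions based on the common variant, and the paper's version may differ:

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, m1=1.0, m2=0.5):
    """Standard two-margin quadruplet loss (assumed formulation):
    the first hinge pushes the negative away from the anchor,
    the second pushes the two negatives apart relative to the
    anchor-positive distance."""
    d = lambda x, y: float(np.sum((x - y) ** 2))  # squared Euclidean distance
    term1 = max(0.0, m1 + d(anchor, positive) - d(anchor, neg1))
    term2 = max(0.0, m2 + d(anchor, positive) - d(neg1, neg2))
    return term1 + term2
```

When the negatives are far from the anchor and from each other, both hinges are inactive and the loss is zero; embeddings that collapse anchor, positive, and negatives together incur both margin penalties.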
arXiv Detail & Related papers (2024-02-22T16:10:39Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Eight Years of Face Recognition Research: Reproducibility, Achievements and Open Issues [6.608320705848282]
Many different face recognition algorithms have been proposed in the last thirty years of intensive research in the field.
Since 2015, state-of-the-art face recognition has been rooted in deep learning models.
This work is a follow-up to our previous work, developed in 2014 and published in 2016, which showed the impact of various facial aspects on face recognition algorithms.
arXiv Detail & Related papers (2022-08-08T10:40:29Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human perception and recognition.
In this paper, we propose automatic face warping, which needs only an extremely limited number of queries to fool the target model.
We evaluate the proposed method in the decision-based black-box attack setting.
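In the decision-based (hard-label) setting referenced above, the attacker sees only the model's final decision, not scores or gradients. A common query-efficient primitive in such attacks is a binary search along the line between the original input and a known adversarial point to find a minimally perturbed adversarial example. The sketch below illustrates that generic primitive with a toy decision function; it is not the RAF face-warping method itself, and the function names are illustrative:

```python
import numpy as np

def binary_search_boundary(decision, x_orig, x_adv, steps=25):
    """Shrink a known adversarial example toward the original input
    while it still flips the black-box decision, using only hard-label
    queries. `decision(x)` returns True iff x is adversarial."""
    lo, hi = 0.0, 1.0  # interpolation weight toward x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = (1.0 - mid) * x_orig + mid * x_adv
        if decision(x_mid):   # still adversarial: move closer to x_orig
            hi = mid
        else:                 # crossed back: back off toward x_adv
            lo = mid
    return (1.0 - hi) * x_orig + hi * x_adv
```

Each iteration halves the uncertainty about where the decision boundary lies along the interpolation path, so 25 queries localize it to roughly 2^-25 of the initial distance.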
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models against physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D-face modeling.
We further propose a Face3DAdv method considering the 3D face transformations and realistic physical variations.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.