FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems
- URL: http://arxiv.org/abs/2104.04107v1
- Date: Thu, 8 Apr 2021 23:00:25 GMT
- Title: FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems
- Authors: Liang Tong, Zhengzhang Chen, Jingchao Ni, Wei Cheng, Dongjin Song,
Haifeng Chen, Yevgeniy Vorobeychik
- Abstract summary: FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
- Score: 49.577302852655144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present FACESEC, a framework for fine-grained robustness evaluation of
face recognition systems. FACESEC evaluation is performed along four dimensions
of adversarial modeling: the nature of perturbation (e.g., pixel-level or face
accessories), the attacker's system knowledge (about training data and learning
architecture), goals (dodging or impersonation), and capability (tailored to
individual inputs or across sets of these). We use FACESEC to study five face
recognition systems in both closed-set and open-set settings, and to evaluate
the state-of-the-art approach for defending against physically realizable
attacks on these. We find that accurate knowledge of neural architecture is
significantly more important than knowledge of the training data in black-box
attacks. Moreover, we observe that open-set face recognition systems are more
vulnerable than closed-set systems under different types of attacks. The
efficacy of attacks for other threat model variations, however, appears highly
dependent on both the nature of perturbation and the neural network
architecture. For example, attacks that involve adversarial face masks are
usually more potent, even against adversarially trained models, and the ArcFace
architecture tends to be more robust than the others.
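As a rough illustration of the four evaluation dimensions enumerated in the abstract, the sketch below encodes them as a simple threat-model configuration. All class and field names here are hypothetical and are not part of FACESEC's actual API; the block only makes the dimensions concrete.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the four FACESEC evaluation dimensions described in
# the abstract; names and values are illustrative, not FACESEC's real API.

class Perturbation(Enum):
    PIXEL_LEVEL = "pixel"          # digital, norm-bounded noise
    FACE_ACCESSORY = "accessory"   # physically realizable face accessory
    FACE_MASK = "face_mask"        # adversarial face mask (noted as most potent)

class Goal(Enum):
    DODGING = "dodging"                # evade recognition of one's own identity
    IMPERSONATION = "impersonation"    # be recognized as a chosen target identity

@dataclass
class ThreatModel:
    perturbation: Perturbation
    knows_architecture: bool   # attacker knows the victim's neural architecture
    knows_training_data: bool  # attacker knows the victim's training data
    goal: Goal
    universal: bool            # True: one perturbation reused across a set of
                               # inputs; False: tailored to an individual input

# Example: a black-box attacker who knows the architecture but not the training
# data, using an adversarial face mask to impersonate a target on one image.
attack_setting = ThreatModel(
    perturbation=Perturbation.FACE_MASK,
    knows_architecture=True,
    knows_training_data=False,
    goal=Goal.IMPERSONATION,
    universal=False,
)
```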
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Quadruplet Loss For Improving the Robustness to Face Morphing Attacks [0.0]
Face recognition systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks (a generic sketch of a quadruplet-style loss appears after this list).
arXiv Detail & Related papers (2024-02-22T16:10:39Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far short of human-level perception and recognition.
In this paper, we propose automatic face warping, which needs only an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Adversarial Attacks against Face Recognition: A Comprehensive Study [3.766020696203255]
Face recognition (FR) systems have demonstrated outstanding verification performance.
Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images.
arXiv Detail & Related papers (2020-07-22T22:46:00Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
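For the quadruplet-loss entry above, the following is a minimal sketch of a generic quadruplet margin loss in PyTorch. It follows the standard quadruplet formulation (anchor, positive, and two negatives from different identities) and is not the specific loss proposed in that paper; all names and margin values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor: torch.Tensor,
                    positive: torch.Tensor,
                    negative1: torch.Tensor,
                    negative2: torch.Tensor,
                    margin1: float = 1.0,
                    margin2: float = 0.5) -> torch.Tensor:
    """Generic quadruplet margin loss over batches of face embeddings.

    Extends the triplet loss with a second term that also pushes the
    anchor-positive distance below the distance between two embeddings
    drawn from different (negative) identities.
    """
    d_ap = F.pairwise_distance(anchor, positive)      # same-identity distance
    d_an = F.pairwise_distance(anchor, negative1)     # anchor vs. other identity
    d_nn = F.pairwise_distance(negative1, negative2)  # two different identities

    term1 = F.relu(d_ap - d_an + margin1)  # classic triplet-style constraint
    term2 = F.relu(d_ap - d_nn + margin2)  # extra constraint from the 2nd negative
    return (term1 + term2).mean()

# Usage with random embeddings (batch of 8, 512-dim face descriptors):
emb = lambda: torch.randn(8, 512)
loss = quadruplet_loss(emb(), emb(), emb(), emb())
```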