Attack Analysis of Face Recognition Authentication Systems Using Fast
Gradient Sign Method
- URL: http://arxiv.org/abs/2203.05653v1
- Date: Thu, 10 Mar 2022 21:35:59 GMT
- Title: Attack Analysis of Face Recognition Authentication Systems Using Fast
Gradient Sign Method
- Authors: Arbena Musa, Kamer Vishi, Blerim Rexha
- Abstract summary: This paper analyzes and presents the Fast Gradient Sign Method (FGSM) attack against face recognition used for biometric authentication.
Machine Learning techniques are used to train and test a model that classifies and identifies different people's faces.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Biometric authentication methods, representing the "something you are"
scheme, are considered the most secure approach for gaining access to protected
resources. Recent attacks using Machine Learning techniques demand a serious
systematic reevaluation of biometric authentication. This paper analyzes and
presents the Fast Gradient Sign Method (FGSM) attack against face recognition
used for biometric authentication. Machine Learning techniques are used to
train and test a model that classifies and identifies different people's faces
and that serves as the target of the attack. Furthermore, the case study
analyzes the implementation of FGSM and the degree of performance degradation
the model suffers when attacked with this method. Tests were performed while
varying the parameters of both model training and the attack, demonstrating
the effectiveness of applying the FGSM.
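FGSM perturbs an input image one step in the direction of the sign of the loss gradient with respect to the input pixels, scaled by a budget epsilon. The paper does not publish code, so the following is a minimal PyTorch sketch of the attack against a generic face classifier; the model, tensor shapes, and parameter names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """Craft FGSM adversarial examples for a face classifier (illustrative sketch).

    images:  batch of face images, shape (N, C, H, W), pixel values in [0, 1]
    labels:  true identity labels, shape (N,)
    epsilon: perturbation budget controlling attack strength
    """
    model.eval()  # attack a fixed, trained model
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    model.zero_grad()
    loss.backward()
    # Move every pixel one step in the direction that increases the loss.
    adv = images + epsilon * images.grad.sign()
    # Clip back to the valid pixel range so the result is still an image.
    return adv.clamp(0.0, 1.0).detach()
```

A typical evaluation then compares classification accuracy on clean and adversarial batches while sweeping epsilon: larger budgets generally degrade accuracy further, at the cost of a more visible perturbation.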
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection [0.0]
We propose a new T-shirt Face Presentation Attack database of 1,608 T-shirt attacks using 100 unique presentation attack instruments.
We show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms fail to robustly generalize to the new attacks.
arXiv Detail & Related papers (2022-11-14T14:11:23Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch [0.0]
We propose an algorithm to blend behavioural biometrics with multi-factor authentication (MFA).
This work proposes a two-step user verification algorithm that verifies the user's identity using motion-based biometrics.
arXiv Detail & Related papers (2021-10-07T02:46:21Z)
- AuthNet: A Deep Learning based Authentication Mechanism using Temporal Facial Feature Movements [0.0]
We propose an alternative authentication mechanism that uses both facial recognition and the unique movements of that particular face while uttering a password.
The proposed model is not inhibited by language barriers because a user can set a password in any language.
arXiv Detail & Related papers (2020-12-04T10:46:12Z)
- On the Effectiveness of Vision Transformers for Zero-shot Face Anti-Spoofing [7.665392786787577]
In this work, we use transfer learning from the vision transformer model for the zero-shot anti-spoofing task.
The proposed approach outperforms the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and SiW-M datasets by a large margin.
arXiv Detail & Related papers (2020-11-16T15:14:59Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)