Unified Face Matching and Physical-Digital Spoofing Attack Detection
- URL: http://arxiv.org/abs/2501.09635v1
- Date: Thu, 16 Jan 2025 16:24:21 GMT
- Title: Unified Face Matching and Physical-Digital Spoofing Attack Detection
- Authors: Arun Kunwar, Ajita Rattani
- Abstract summary: Face recognition systems face increasing threats from physical and digital spoofing attacks.
This paper introduces an innovative unified model designed for face recognition and detection of physical and digital attacks.
By leveraging the advanced Swin Transformer backbone and HiLo attention in a convolutional neural network framework, we address unified face recognition and spoof attack detection more effectively.
- Abstract: Face recognition technology has dramatically transformed the landscape of security, surveillance, and authentication systems, offering a user-friendly and non-invasive biometric solution. However, despite its significant advantages, face recognition systems face increasing threats from physical and digital spoofing attacks. Current research typically treats face recognition and attack detection as distinct classification challenges. This approach necessitates the implementation of separate models for each task, leading to considerable computational complexity, particularly on devices with limited resources. Such inefficiencies can stifle scalability and hinder performance. In response to these challenges, this paper introduces an innovative unified model designed for face recognition and detection of physical and digital attacks. By leveraging the advanced Swin Transformer backbone and incorporating HiLo attention in a convolutional neural network framework, we address unified face recognition and spoof attack detection more effectively. Moreover, we introduce augmentation techniques that replicate the traits of physical and digital spoofing cues, significantly enhancing the model's robustness. Through comprehensive experimental evaluation across various datasets, we showcase the effectiveness of our model in unified face recognition and spoof detection. Additionally, we confirm its resilience against unseen physical and digital spoofing attacks, underscoring its potential for real-world applications.
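The paper's core efficiency argument, one shared backbone serving both face matching and spoof detection, can be illustrated with a toy two-head forward pass. This is an illustrative sketch only: the feature dimension, the number of identities, and the random-projection "backbone" are hypothetical stand-ins, not the paper's actual Swin Transformer with HiLo attention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify these.
FEAT_DIM = 128   # stand-in for the backbone's embedding size
NUM_IDS = 10     # number of enrolled identities

# Stand-in backbone: in the paper this is a Swin Transformer with
# HiLo attention; here it is just a fixed random projection.
W_backbone = rng.standard_normal((3 * 32 * 32, FEAT_DIM)) * 0.01

# Two task heads sharing the same features, so the expensive
# feature extraction runs once instead of once per model:
W_id = rng.standard_normal((FEAT_DIM, NUM_IDS)) * 0.01   # face matching
W_spoof = rng.standard_normal((FEAT_DIM, 2)) * 0.01      # live vs. attack

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def unified_forward(face_batch):
    """Return (identity probabilities, spoof probabilities) for a batch
    of flattened face crops, using one shared feature-extraction pass."""
    feats = np.tanh(face_batch @ W_backbone)   # shared embedding
    return softmax(feats @ W_id), softmax(feats @ W_spoof)

batch = rng.standard_normal((4, 3 * 32 * 32))
id_probs, spoof_probs = unified_forward(batch)
```

The point of the sketch is the sharing pattern: both heads read the same embedding, which is what lets a unified model cut the compute cost that two separate classifiers would incur on resource-limited devices.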
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Quadruplet Loss For Improving the Robustness to Face Morphing Attacks [0.0]
Face Recognition Systems are vulnerable to sophisticated attacks, notably face morphing techniques.
We introduce a novel quadruplet loss function for increasing the robustness of face recognition systems against morphing attacks.
arXiv Detail & Related papers (2024-02-22T16:10:39Z)
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [6.492755549391469]
Face recognition systems are widely deployed in high-security applications.
Digital manipulations, such as face morphing, pose a security threat to face recognition systems.
We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models against physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D-face modeling.
We further propose a Face3DAdv method considering the 3D face transformations and realistic physical variations.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications.
Despite these high accuracies, the robustness of these algorithms against attacks and bias has been questioned.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
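Among the related papers above, the quadruplet-loss idea is compact enough to sketch. The snippet below implements the generic quadruplet loss (a triplet term plus a second margin term over two negatives that do not involve the anchor); the margins are illustrative, and the morphing-specific variant proposed in the listed paper may differ.

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, alpha1=1.0, alpha2=0.5):
    """Generic quadruplet loss over batches of embeddings.

    Term 1 is the usual triplet margin: pull (anchor, positive) closer
    than (anchor, neg1) by at least alpha1.
    Term 2 additionally requires the positive pair to be closer than a
    negative pair (neg1, neg2) that does not involve the anchor, which
    enlarges inter-class distances overall.
    """
    d = lambda x, y: np.sum((x - y) ** 2, axis=-1)  # squared Euclidean
    triplet_term = np.maximum(d(anchor, positive) - d(anchor, neg1) + alpha1, 0.0)
    pairwise_term = np.maximum(d(anchor, positive) - d(neg1, neg2) + alpha2, 0.0)
    return np.mean(triplet_term + pairwise_term)
```

For morphing robustness, a natural (hypothetical) batch construction would use a bona fide image as the anchor, a second bona fide image of the same subject as the positive, and morphed or different-subject images as the two negatives, so that morphs are pushed away from every genuine identity cluster.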
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.