Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep
Learning Approach
- URL: http://arxiv.org/abs/2311.04148v1
- Date: Tue, 7 Nov 2023 17:19:59 GMT
- Title: Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep
Learning Approach
- Authors: Banafsheh Adami and Nima Karimian
- Abstract summary: We introduce an innovative anti-spoofing approach that combines an unsupervised autoencoder with a convolutional block attention module.
The scheme has achieved an average BPCER of 0.96% with an APCER of 1.6% for presentation attacks involving various types of spoofed samples.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contactless fingerprint recognition offers a higher level of user comfort and
addresses hygiene concerns more effectively. However, it is also more
vulnerable to presentation attacks such as photo paper, paper-printout, and
various display attacks, which makes it more challenging to implement in
biometric systems compared to contact-based modalities. Limited research has
been conducted on presentation attacks in contactless fingerprint systems, and
these studies have encountered challenges in terms of generalization and
scalability because both bonafide samples and presentation attacks are used
during model training. Although this approach appears promising, it cannot
handle unseen attacks, which is a crucial factor for developing PAD
methods that generalize effectively. We introduce an innovative
anti-spoofing approach that combines an unsupervised autoencoder with a
convolutional block attention module to address the limitations of existing
methods. Our model is exclusively trained on bonafide images without exposure
to any spoofed samples during the training phase. It is then evaluated against
various types of presentation attack images in the testing phase. The proposed
scheme achieves an average BPCER of 0.96% with an APCER of 1.6% for
presentation attacks involving various types of spoofed samples.
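The core idea of the abstract — train a reconstruction model on bonafide images only, then flag test inputs that reconstruct poorly — can be sketched with a linear stand-in for the paper's convolutional autoencoder. This is a minimal illustration, not the authors' method: the CBAM attention module and network architecture are omitted, and the data, component count `k`, and percentile threshold are all illustrative assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a k-component linear 'autoencoder' (PCA) on bonafide data only."""
    mu = X.mean(axis=0)
    # Principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T              # shared encoder/decoder weights, shape (d, k)
    return mu, W

def reconstruction_error(X, mu, W):
    """Per-sample squared reconstruction error (the anomaly score)."""
    Z = (X - mu) @ W          # encode
    Xr = Z @ W.T + mu         # decode
    return ((X - Xr) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
# Toy bonafide samples near a low-dimensional subspace (d=20, intrinsic dim 3).
basis = rng.normal(size=(3, 20))
bona = rng.normal(size=(500, 3)) @ basis + 0.05 * rng.normal(size=(500, 20))
spoof = rng.normal(size=(100, 20))      # spoofs fall off that manifold

mu, W = fit_linear_autoencoder(bona, k=3)
# Threshold set from bonafide scores alone (99th percentile here).
thr = np.percentile(reconstruction_error(bona, mu, W), 99)
err_spoof = reconstruction_error(spoof, mu, W)
print((err_spoof > thr).mean())         # fraction of spoofs flagged as attacks
```

Because the model never sees spoofed samples, the threshold depends only on bonafide statistics, which is what lets the scheme handle unseen attack types.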
Related papers
- Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models [8.979820109339286]
Smartphone-based contactless fingerphoto authentication has become a reliable alternative to traditional contact-based fingerprint biometric systems.
Despite its convenience, fingerprint authentication through fingerphotos is more vulnerable to presentation attacks.
We propose a novel unsupervised approach based on a state-of-the-art deep-learning-based diffusion model, the Denoising Diffusion Probabilistic Model (DDPM).
The proposed approach detects Presentation Attacks (PA) by calculating the reconstruction similarity between the input and output pairs of the DDPM.
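The decision rule described for this related work — score a sample by how similar the diffusion model's output is to its input — can be illustrated independently of the DDPM itself. In the sketch below, `reconstruct` is a placeholder for the trained model (the two lambdas merely mimic faithful vs. degraded reconstruction), the single-window SSIM is a simplification, and the threshold is an assumed value, not one from the paper.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def is_attack(img, reconstruct, thr=0.8):
    """Flag a presentation attack when the reconstruction is dissimilar."""
    return global_ssim(img, reconstruct(img)) < thr

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A model trained on bonafide data reproduces bonafide inputs closely...
faithful = lambda x: x + 0.01 * rng.standard_normal(x.shape)
# ...but maps off-distribution (spoof) inputs to something far away.
degraded = lambda x: rng.random(x.shape)
print(is_attack(img, faithful), is_attack(img, degraded))
```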
arXiv Detail & Related papers (2024-09-27T11:07:48Z) - Self-Supervised Representation Learning for Adversarial Attack Detection [6.528181610035978]
Supervised learning-based adversarial attack detection methods rely on a large number of labeled data.
We propose a self-supervised representation learning framework for the adversarial attack detection task to address this drawback.
arXiv Detail & Related papers (2024-07-05T09:37:16Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - A Universal Anti-Spoofing Approach for Contactless Fingerprint Biometric
Systems [0.0]
We propose a universal presentation attack detection method for contactless fingerprints.
We generated synthetic contactless fingerprints from live finger photos using StyleGAN and integrated them to train a semi-supervised ResNet-18 model.
A novel joint loss function, combining the ArcFace and Center loss, is introduced with a regularization term to balance the two loss functions.
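The joint objective in this related work combines an angular-margin classification term with a feature-compactness term. A numpy sketch of that combination is below; the margin `m`, scale `s`, and balance weight `lam` are illustrative assumptions (the paper's actual values and training setup are not given here), and the random tensors merely stand in for backbone embeddings.

```python
import numpy as np

def arcface_loss(emb, W, y, s=30.0, m=0.5):
    """ArcFace: cross-entropy with an additive angular margin on the target logit."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos = np.clip(e @ w, -1.0, 1.0)            # (N, C) cosine similarities
    theta = np.arccos(cos)
    idx = np.arange(len(y))
    logits = s * cos
    logits[idx, y] = s * np.cos(theta[idx, y] + m)   # margin on target angle
    logits -= logits.max(axis=1, keepdims=True)      # numerically stable softmax
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[idx, y].mean()

def center_loss(emb, centers, y):
    """Penalize the distance of each embedding to its class center."""
    return ((emb - centers[y]) ** 2).sum(axis=1).mean()

def joint_loss(emb, W, centers, y, lam=0.01):
    # lam regularizes between the discriminative and compactness terms.
    return arcface_loss(emb, W, y) + lam * center_loss(emb, centers, y)

rng = np.random.default_rng(0)
emb = rng.normal(size=(16, 8))        # stand-in backbone embeddings
W = rng.normal(size=(8, 4))           # class-weight matrix (4 classes)
centers = rng.normal(size=(4, 8))     # class centers for the Center loss
y = rng.integers(0, 4, size=16)       # class labels
print(joint_loss(emb, W, centers, y))
```

ArcFace pushes classes apart on the hypersphere while the Center term pulls same-class embeddings together; `lam` trades off the two.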
arXiv Detail & Related papers (2023-10-23T15:46:47Z) - Defending Pre-trained Language Models as Few-shot Learners against
Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z) - When Measures are Unreliable: Imperceptible Adversarial Perturbations
toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z) - Attacking Face Recognition with T-shirts: Database, Vulnerability
Assessment and Detection [0.0]
We propose a new T-shirt Face Presentation Attack database of 1,608 T-shirt attacks using 100 unique presentation attack instruments.
We show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms fail to robustly generalize to the new attacks.
arXiv Detail & Related papers (2022-11-14T14:11:23Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - On the Effectiveness of Vision Transformers for Zero-shot Face
Anti-Spoofing [7.665392786787577]
In this work, we use transfer learning from the vision transformer model for the zero-shot anti-spoofing task.
The proposed approach outperforms the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and SiW-M datasets by a large margin.
arXiv Detail & Related papers (2020-11-16T15:14:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.