Open-Set: ID Card Presentation Attack Detection using Neural Transfer Style
- URL: http://arxiv.org/abs/2312.13993v1
- Date: Thu, 21 Dec 2023 16:28:08 GMT
- Title: Open-Set: ID Card Presentation Attack Detection using Neural Transfer Style
- Authors: Reuben Markham, Juan M. Espin, Mario Nieto-Hidalgo, Juan E. Tapia
- Abstract summary: This work explores ID card Presentation Attack Instruments (PAI) in order to improve the generation of samples with four Generative Adversarial Network (GAN)-based image translation models.
We obtain an EER performance improvement of 0.63 percentage points for print attacks and a loss of 0.29 percentage points for screen capture attacks.
- Score: 2.946386240942919
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The accurate detection of ID card Presentation Attacks (PA) is becoming
increasingly important due to the rising number of online/remote services that
require the presentation of digital photographs of ID cards for digital
onboarding or authentication. Furthermore, cybercriminals are continuously
searching for innovative ways to fool authentication systems to gain
unauthorized access to these services. Although advances in neural network
design and training have pushed image classification to the state of the art,
one of the main challenges faced by the development of fraud detection systems
is the curation of representative datasets for training and evaluation. The
handcrafted creation of representative presentation attack samples often
requires expertise and is very time-consuming, thus an automatic process of
obtaining high-quality data is highly desirable. This work explores ID card
Presentation Attack Instruments (PAI) in order to improve the generation of
samples with four Generative Adversarial Network (GAN)-based image
translation models and analyses the effectiveness of the generated data for
training fraud detection systems. Using open-source data, we show that
synthetic attack presentations are an adequate complement for additional real
attack presentations, where we obtain an EER performance improvement of 0.63
percentage points for print attacks and a loss of 0.29 percentage points for
screen capture attacks.
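The EER reported above is the Equal Error Rate: the decision threshold at which the bona fide rejection rate (BPCER) equals the attack acceptance rate (APCER); lower EER is better, so the 0.63-point gain corresponds to a drop in EER. A minimal sketch of computing EER from raw detector scores (the function name and the score distributions are illustrative assumptions, not data or code from the paper):

```python
import numpy as np

def compute_eer(bona_fide_scores, attack_scores):
    """Equal Error Rate: the operating point where the rate of rejected
    bona fide presentations (BPCER) equals the rate of accepted attack
    presentations (APCER). Convention: higher score = more bona fide."""
    thresholds = np.sort(np.concatenate([bona_fide_scores, attack_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        bpcer = np.mean(bona_fide_scores < t)   # bona fide wrongly rejected
        apcer = np.mean(attack_scores >= t)     # attacks wrongly accepted
        gap = abs(bpcer - apcer)
        if gap < best_gap:
            best_gap, eer = gap, (bpcer + apcer) / 2.0
    return eer

# Illustrative synthetic scores only, not results from the paper.
rng = np.random.default_rng(0)
bona_fide = rng.normal(0.8, 0.10, 1000)
attacks = rng.normal(0.3, 0.15, 1000)
print(f"EER: {compute_eer(bona_fide, attacks):.2%}")
```

The sweep simply walks every observed score as a candidate threshold and keeps the point where the two error rates are closest; production evaluations typically interpolate the ROC curve instead.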
Related papers
- Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses the gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models. Specifically, we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
arXiv Detail & Related papers (2024-11-03T18:44:28Z)
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep Learning Approach [0.0]
We introduce an innovative anti-spoofing approach that combines an unsupervised autoencoder with a convolutional block attention module.
The scheme has achieved an average BPCER of 0.96% with an APCER of 1.6% for presentation attacks involving various types of spoofed samples.
arXiv Detail & Related papers (2023-11-07T17:19:59Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- An Open Patch Generator based Fingerprint Presentation Attack Detection using Generative Adversarial Network [3.5558308387389626]
Presentation Attack (PA), or spoofing, is one of the threats caused by presenting a spoof of a genuine fingerprint to the sensor of Automatic Fingerprint Recognition Systems (AFRS).
This paper proposes a CNN-based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated from the proposed Open Patch Generator (OPG).
Overall accuracies of 96.20%, 94.97%, and 92.90% have been achieved on the LivDet 2015, 2017, and 2019 databases, respectively, under the LivDet protocol scenarios.
arXiv Detail & Related papers (2023-06-06T10:52:06Z)
- An Efficient Ensemble Explainable AI (XAI) Approach for Morphed Face Detection [1.2599533416395763]
We present a novel visual explanation approach named Ensemble XAI to provide a more comprehensive visual explanation for a deep learning prognostic model (EfficientNet-Grad1).
The experiments have been performed on three publicly available datasets, namely the Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS).
arXiv Detail & Related papers (2023-04-23T13:43:06Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure application scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Synthetic ID Card Image Generation for Improving Presentation Attack Detection [12.232059909207578]
This work explores three methods for synthetically generating ID card images to increase the amount of data while training fraud-detection networks.
Our results indicate that databases can be supplemented with synthetic images with no loss in performance for the print/scan Presentation Attack Instrument Species (PAIS) and only a 1% loss in performance for the screen capture PAIS.
arXiv Detail & Related papers (2022-10-31T19:07:30Z)
- SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality thanks to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.