Spoofing Detection on Hand Images Using Quality Assessment
- URL: http://arxiv.org/abs/2110.12923v1
- Date: Fri, 22 Oct 2021 10:06:53 GMT
- Title: Spoofing Detection on Hand Images Using Quality Assessment
- Authors: Asish Bera, Ratnadeep Dey, Debotosh Bhattacharjee, Mita Nasipuri, and
Hubert P. H. Shum
- Abstract summary: This paper presents an anti-spoofing method for hand biometrics.
A presentation attack detection approach is addressed by assessing the visual quality of genuine and fake hand images.
Ten quality metrics are measured from each sample for classification between original and fake hand images.
- Score: 21.58895176617405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on biometrics focuses on achieving a high success rate of
authentication and addressing the concern of various spoofing attacks. Although
hand geometry recognition provides adequate security over unauthorized access,
it is susceptible to presentation attacks. This paper presents an anti-spoofing
method for hand biometrics. A presentation attack detection approach is
addressed by assessing the visual quality of genuine and fake hand images. A
threshold-based gradient magnitude similarity quality metric is proposed to
discriminate between the real and spoofed hand samples. The visual hand images
of 255 subjects from the Bogazici University hand database are considered as
original samples. Correspondingly, from each genuine sample, we acquire a
forged image using a Canon EOS 700D camera. Such fake hand images with natural
degradation are considered for electronic screen display based spoofing attack
detection. Furthermore, we create another fake hand dataset with artificial
degradation by introducing additional Gaussian blur, salt and pepper, and
speckle noises to original images. Ten quality metrics are measured from each
sample for classification between original and fake hand images. The
classification experiments are performed using the k-nearest neighbors, random
forest, and support vector machine classifiers, as well as deep convolutional
neural networks. The proposed gradient similarity-based quality metric achieves
1.5% average classification error using the k-nearest neighbors and random
forest classifiers. An average classification error of 2.5% is obtained using
the baseline evaluation with the MobileNetV2 deep network for discriminating
original and different types of fake hand samples.
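As a concrete illustration, the following is a minimal Python sketch of a gradient-magnitude-similarity style quality score with a threshold decision, written in the spirit of GMSD-type metrics. The Prewitt kernels, the constant c, mean pooling, and the 0.95 threshold are illustrative assumptions rather than the paper's exact formulation; the remaining nine quality metrics and the kNN/random forest/SVM/MobileNetV2 classification stage are omitted.

```python
# Hedged sketch: a gradient-magnitude-similarity style quality score with a
# threshold decision. Kernels, pooling, c, and the threshold are assumptions,
# not the paper's exact metric.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Prewitt-style kernels commonly used for gradient magnitude computation.
H_X = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float) / 3.0
H_Y = H_X.T

def gradient_magnitude(img):
    """Per-pixel gradient magnitude of a grayscale image in [0, 1]."""
    gx = convolve(img, H_X, mode="nearest")
    gy = convolve(img, H_Y, mode="nearest")
    return np.sqrt(gx ** 2 + gy ** 2)

def gms_score(reference, test, c=0.0026):
    """Pooled gradient magnitude similarity between two aligned images.

    Higher means the test image preserves the reference gradients better,
    i.e., it looks less degraded.
    """
    m_r = gradient_magnitude(reference)
    m_t = gradient_magnitude(test)
    gms_map = (2.0 * m_r * m_t + c) / (m_r ** 2 + m_t ** 2 + c)
    return float(gms_map.mean())

def is_spoof(reference, test, threshold=0.95):
    """Threshold-based decision: low similarity suggests a degraded (fake) sample."""
    return gms_score(reference, test) < threshold

# Toy demo with synthetic data: a blurred "recaptured" copy scores lower.
rng = np.random.default_rng(0)
genuine = rng.random((128, 128))
recaptured = gaussian_filter(genuine, sigma=1.5)         # artificial Gaussian blur
noisy = genuine.copy()
mask = rng.random(genuine.shape) < 0.02                  # salt-and-pepper corruption
noisy[mask] = rng.integers(0, 2, mask.sum()).astype(float)

print(gms_score(genuine, genuine))     # 1.0 by construction
print(gms_score(genuine, recaptured))  # noticeably lower for the blurred copy
print(is_spoof(genuine, noisy))        # threshold decision for the noisy copy
```

In the paper, quality scores of this kind, computed for genuine, recaptured, and artificially degraded samples, would form the feature vectors fed to the classifiers listed above.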
Related papers
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Individualized Deepfake Detection Exploiting Traces Due to Double Neural-Network Operations [32.33331065408444]
This study focuses on deepfake detection for facial images of individual public figures.
Existing deepfake detectors are not optimized for this setting, in which an image is associated with a specific and identifiable individual.
We demonstrate that the detection performance can be improved by exploiting the idempotency property of neural networks.
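A hedged sketch of one plausible reading of the idempotency idea: if a generator g is approximately idempotent, re-applying it to an image it already produced changes little, while a genuine photograph changes more. The stand-in network g and the residual score below are illustrative assumptions, not the cited paper's models or detector.

```python
# Hedged sketch: measure how much a candidate generator changes an image when
# applied once more. `g` is an untrained placeholder image-to-image network.
import torch
import torch.nn as nn

g = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

@torch.no_grad()
def reapplication_residual(x: torch.Tensor) -> torch.Tensor:
    """Mean absolute change per image when g is applied once more."""
    return (g(x) - x).abs().mean(dim=(1, 2, 3))

x = torch.rand(2, 3, 128, 128)
print(reapplication_residual(x))  # small values would suggest x already passed through g
```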
arXiv Detail & Related papers (2023-12-13T10:21:00Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Presentation Attack detection using Wavelet Transform and Deep Residual Neural Net [5.425986555749844]
Biometric systems can be deceived by impostors in several ways.
The bio-metric images, especially the iris and face, are vulnerable to different presentation attacks.
This research applies deep learning approaches to mitigate presentation attacks in a biometric access control system.
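A minimal sketch of the wavelet-plus-residual-network pairing named in the title above, assuming a single-level Haar DWT whose four sub-bands feed a ResNet-18 with a widened first convolution; the cited paper's actual wavelet family, architecture, and training setup are not reproduced here.

```python
# Hedged sketch: DWT sub-bands as input channels to a ResNet-18 classifier
# (bona fide vs. attack). Kernel choice and architecture are assumptions.
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

def dwt_channels(gray: np.ndarray) -> torch.Tensor:
    """Stack LL, LH, HL, HH sub-bands of a single-level 2-D DWT as 4 channels."""
    ll, (lh, hl, hh) = pywt.dwt2(gray, "haar")
    return torch.from_numpy(np.stack([ll, lh, hl, hh])).float().unsqueeze(0)

model = models.resnet18(weights=None, num_classes=2)   # untrained stand-in
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = dwt_channels(np.random.rand(224, 224))             # sub-bands are 112x112
logits = model(x)
print(logits.shape)                                    # torch.Size([1, 2])
```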
arXiv Detail & Related papers (2023-11-23T20:21:49Z)
- A Universal Anti-Spoofing Approach for Contactless Fingerprint Biometric Systems [0.0]
We propose a universal presentation attack detection method for contactless fingerprints.
We generate synthetic contactless fingerprints from live finger photos using StyleGAN and integrate them to train a semi-supervised ResNet-18 model.
A novel joint loss function combining the ArcFace and Center loss is introduced, with a regularization term to balance the two loss functions.
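A hedged sketch of such a joint loss: an ArcFace-style additive angular margin softmax plus a Center loss, combined with a weighting factor lam that stands in for the unspecified regularization. The margin, scale, and lam values are illustrative assumptions.

```python
# Hedged sketch: additive combination of an ArcFace-style margin softmax and a
# Center loss. Hyperparameters are assumptions, not the cited paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceCenterLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.50, lam=0.01):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, feat_dim))        # ArcFace class weights
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))  # Center-loss centers
        self.s, self.m, self.lam = s, m, lam

    def forward(self, feats, labels):
        # ArcFace: additive angular margin on the target-class cosine.
        cos = F.linear(F.normalize(feats), F.normalize(self.W)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos) * self.s
        arc = F.cross_entropy(logits, labels)
        # Center loss: pull features toward their class centers.
        center = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        return arc + self.lam * center

# Usage with hypothetical 512-d embeddings for two classes (bona fide / attack):
loss_fn = ArcFaceCenterLoss(feat_dim=512, num_classes=2)
feats = torch.randn(8, 512, requires_grad=True)
labels = torch.randint(0, 2, (8,))
loss_fn(feats, labels).backward()
```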
arXiv Detail & Related papers (2023-10-23T15:46:47Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- A Comparative Study of Fingerprint Image-Quality Estimation Methods [54.84936551037727]
Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system.
In this work, we review existing approaches for fingerprint image-quality estimation.
We have also tested a selection of fingerprint image-quality estimation algorithms.
arXiv Detail & Related papers (2021-11-14T19:53:12Z)
- Exploring Adversarial Fake Images on Face Manifold [5.26916168336451]
Images synthesized by powerful generative adversarial network (GAN) based methods have drawn moral and privacy concerns.
In this paper, instead of adding adversarial noise, we optimally search adversarial points on face manifold to generate anti-forensic fake face images.
arXiv Detail & Related papers (2021-01-09T02:08:59Z)
- DeepFake Detection Based on the Discrepancy Between the Face and its Context [94.47879216590813]
We propose a method for detecting face swapping and other identity manipulations in single images.
Our approach involves two networks: (i) a face identification network that considers the face region bounded by a tight semantic segmentation, and (ii) a context recognition network that considers the face context.
We describe a method which uses the recognition signals from our two networks to detect such discrepancies.
Our method achieves state-of-the-art results on the FaceForensics++, Celeb-DF-v2, and DFDC benchmarks for face manipulation detection, and even generalizes to detect fakes produced by unseen methods.
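A minimal sketch of the two-stream comparison described above, assuming both streams output identity embeddings whose cosine disagreement is thresholded; the untrained stand-in backbones and the crude 1 - cosine score are illustrative, not the paper's trained networks or fusion rule.

```python
# Hedged sketch: compare recognition signals from a face-region stream and a
# context stream. Both backbones are untrained stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

face_net = models.resnet18(weights=None)
face_net.fc = nn.Identity()       # embedding for the tightly-cropped face region
context_net = models.resnet18(weights=None)
context_net.fc = nn.Identity()    # embedding for the surrounding face context

@torch.no_grad()
def discrepancy(face_crop: torch.Tensor, context_crop: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between the two recognition signals (per image)."""
    f = F.normalize(face_net(face_crop), dim=1)
    c = F.normalize(context_net(context_crop), dim=1)
    return 1.0 - (f * c).sum(dim=1)

face = torch.rand(2, 3, 224, 224)
context = torch.rand(2, 3, 224, 224)
print(discrepancy(face, context))  # threshold this score to flag identity manipulations
```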
arXiv Detail & Related papers (2020-08-27T17:04:46Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
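As a rough illustration of the anomaly-detection framing (not the cited paper's end-to-end deep solution), the sketch below fits a classical one-class model on CNN features of bona fide samples only and flags outliers as presentation attacks; the backbone, data, and one-class SVM settings are assumptions.

```python
# Hedged sketch: one-class baseline for anomaly-detection-based PAD.
# Fit on bona fide features only; attacks are never seen during fitting.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import OneClassSVM

backbone = models.resnet18(weights=None)   # untrained stand-in; use pretrained weights in practice
backbone.fc = nn.Identity()                # 512-d embeddings
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor):           # images: (N, 3, 224, 224), normalized
    return backbone(images).numpy()

bona_fide = torch.rand(16, 3, 224, 224)    # stand-in bona fide data
detector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(embed(bona_fide))

# +1 = bona fide, -1 = presentation attack (outlier).
queries = torch.rand(4, 3, 224, 224)
print(detector.predict(embed(queries)))
```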
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply the Fast Gradient Sign Method to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
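A minimal sketch of the Fast Gradient Sign Method and black-box transfer test mentioned in the entry above, using untrained stand-in classifiers; epsilon, the models, and the data are illustrative assumptions.

```python
# Hedged sketch: FGSM perturbation crafted on one model, evaluated on another
# (transfer/black-box setting). Models and data are untrained stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

source_model = models.resnet18(weights=None, num_classes=10)
target_model = models.resnet18(weights=None, num_classes=10)   # unseen transfer target

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """x_adv = x + eps * sign(grad_x L(model(x), y)), clipped to the valid range."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, 10, (4,))
adv = fgsm(source_model, images, labels)
print(target_model(adv).argmax(dim=1))   # predictions of the other classifier on adversarial inputs
```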
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.