Analysis of Master Vein Attacks on Finger Vein Recognition Systems
- URL: http://arxiv.org/abs/2210.10667v1
- Date: Tue, 18 Oct 2022 06:36:59 GMT
- Title: Analysis of Master Vein Attacks on Finger Vein Recognition Systems
- Authors: Huy H. Nguyen, Trung-Nghia Le, Junichi Yamagishi, and Isao Echizen
- Abstract summary: Finger vein recognition (FVR) systems have been commercially used, especially in ATMs, for customer verification.
It is essential to measure their robustness against various attack methods, especially when a hand-crafted FVR system is used without any countermeasure methods.
We are the first in the literature to introduce master vein attacks in which we craft a vein-looking image so that it can falsely match with as many identities as possible.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finger vein recognition (FVR) systems have been commercially used, especially
in ATMs, for customer verification. Thus, it is essential to measure their
robustness against various attack methods, especially when a hand-crafted FVR
system is used without any countermeasure methods. In this paper, we are the
first in the literature to introduce master vein attacks in which we craft a
vein-looking image so that it can falsely match with as many identities as
possible by the FVR systems. We present two methods for generating master veins
for use in attacking these systems. The first uses an adaptation of the latent
variable evolution algorithm with a proposed generative model (a multi-stage
combination of beta-VAE and WGAN-GP models). The second uses an adversarial
machine learning attack method to attack a strong surrogate CNN-based
recognition system. The two methods can be easily combined to boost their
attack ability. Experimental results demonstrated that the proposed methods
alone and together achieved false acceptance rates up to 73.29% and 88.79%,
respectively, against Miura's hand-crafted FVR system. We also point out that
Miura's system is easily compromised by non-vein-looking samples generated by a
WGAN-GP model with false acceptance rates up to 94.21%. The results raise the
alarm about the robustness of such systems and suggest that master vein attacks
should be treated as a serious security threat.
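The latent variable evolution (LVE) approach in the abstract can be sketched in a few lines: sample latent vectors for a generative model, score each generated image by how many enrolled identities it falsely matches, and iteratively evolve the highest-scoring latent. The sketch below is a toy stand-in under loud assumptions: a random linear map replaces the paper's beta-VAE/WGAN-GP generator, a cosine-similarity check replaces Miura's matcher, and the threshold is invented for illustration.

```python
import numpy as np

# Toy sketch of latent variable evolution (LVE) for a master-sample search.
# The real paper evolves latents of a beta-VAE/WGAN-GP generator scored by
# Miura's matcher; the "generator" and "matcher" here are hypothetical
# stand-ins (random linear map, cosine similarity).

rng = np.random.default_rng(0)

LATENT_DIM, IMG_DIM, N_IDENTITIES = 16, 64, 50
G = rng.normal(size=(IMG_DIM, LATENT_DIM))            # stand-in generator weights
templates = rng.normal(size=(N_IDENTITIES, IMG_DIM))  # enrolled "identities"
THRESHOLD = 0.2                                       # illustrative match threshold

def generate(z):
    """Map a latent vector to a fake sample (stand-in for the generative model)."""
    return G @ z

def match_rate(sample):
    """Fraction of enrolled identities this sample falsely matches."""
    sims = templates @ sample / (np.linalg.norm(templates, axis=1)
                                 * np.linalg.norm(sample) + 1e-9)
    return float(np.mean(sims > THRESHOLD))

def evolve_master(generations=200, pop=32, sigma=0.3):
    """(1 + lambda) evolution strategy over the latent space."""
    best_z = rng.normal(size=LATENT_DIM)
    best_fit = match_rate(generate(best_z))
    for _ in range(generations):
        kids = best_z + sigma * rng.normal(size=(pop, LATENT_DIM))
        fits = [match_rate(generate(z)) for z in kids]
        i = int(np.argmax(fits))
        if fits[i] >= best_fit:
            best_z, best_fit = kids[i], fits[i]
    return best_z, best_fit

z, far = evolve_master()
print(f"false-accept rate of evolved master sample: {far:.2f}")
```

The fitness function only needs black-box match scores, which is why LVE applies even to hand-crafted matchers with no gradients.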
Related papers
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - ViT Unified: Joint Fingerprint Recognition and Presentation Attack Detection [36.05807963935458]
We leverage a vision transformer architecture for joint spoof detection and matching.
We report competitive results with state-of-the-art (SOTA) models for both a sequential system and a unified architecture.
We demonstrate the capability of our unified model to achieve an average integrated matching (IM) accuracy of 98.87% across LivDet 2013 and 2015 CrossMatch sensors.
arXiv Detail & Related papers (2023-05-12T16:51:14Z) - Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g., self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
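The decision-based threat model BASAR operates in can be sketched generically: the attacker queries the model, observes only the predicted label, finds any misclassified point, then binary-searches toward the original input to shrink the perturbation. The linear "model" below is a hypothetical stand-in for a skeleton-based HAR network, and the search is a plain random-probe version, not BASAR's manifold-aware algorithm.

```python
import numpy as np

# Toy sketch of a decision-based (label-only) black-box attack.
# The hidden linear classifier stands in for a real HAR model; the
# attacker interacts with it only through predict().

rng = np.random.default_rng(3)
w = rng.normal(size=50)            # hidden weights; never visible to the attacker

def predict(x):
    """Black-box oracle: only the output label is observable."""
    return int(x @ w > 0)

def decision_based_attack(x, radius=20.0, tries=2000, bisections=30):
    y0 = predict(x)
    # Stage 1: find any input with a different label via large random probes.
    adv = None
    for _ in range(tries):
        cand = x + radius * rng.normal(size=x.shape)
        if predict(cand) != y0:
            adv = cand
            break
    if adv is None:
        return x                   # attack failed within the query budget
    # Stage 2: binary-search along the segment back to x to shrink the
    # perturbation, always keeping the misclassified endpoint.
    lo, hi = 0.0, 1.0
    for _ in range(bisections):
        mid = (lo + hi) / 2
        if predict((1 - mid) * x + mid * adv) != y0:
            hi = mid
        else:
            lo = mid
    return (1 - hi) * x + hi * adv

x = rng.normal(size=50)
adv = decision_based_attack(x)
print("label flipped:", predict(adv) != predict(x),
      "| perturbation norm:", round(float(np.linalg.norm(adv - x)), 2))
```

The key point the paper makes is that label-only access suffices; no gradients or confidence scores are needed.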
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing [5.675436513661266]
Crowdsensing systems are vulnerable to various attacks as they build on non-dedicated and ubiquitous properties.
Previous works suggest that GAN-based attacks are more damaging than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
arXiv Detail & Related papers (2022-02-16T00:23:25Z) - Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT [5.077661193116692]
Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision.
arXiv Detail & Related papers (2021-04-26T09:36:29Z) - Anomaly Detection with Convolutional Autoencoders for Fingerprint Presentation Attack Detection [11.879849130630406]
Presentation attack detection (PAD) methods are used to determine whether samples stem from a bona fide subject or from a presentation attack instrument (PAI).
We propose a new PAD technique based on autoencoders (AEs) trained only on bona fide samples (i.e. one-class) captured in the short wave infrared domain.
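The one-class idea here is simple: fit an autoencoder only on bona fide samples and flag anything it reconstructs poorly. The sketch below makes illustrative assumptions: a linear (PCA-style, tied-weight) autoencoder replaces the paper's convolutional one, the data are synthetic rather than short-wave-infrared captures, and the 99th-percentile threshold rule is invented for the example.

```python
import numpy as np

# Minimal sketch of one-class anomaly detection for presentation attacks:
# an autoencoder (a linear PCA-style one here, for brevity) is fit only on
# bona fide samples; attacks are flagged by high reconstruction error.
# Data and threshold rule are illustrative, not from the paper.

rng = np.random.default_rng(1)

# Bona fide samples lie near a low-dimensional subspace; attacks do not.
basis = rng.normal(size=(5, 100))
bona_fide = rng.normal(size=(500, 5)) @ basis + 0.1 * rng.normal(size=(500, 100))
attacks = rng.normal(size=(100, 100)) * 2.0

# "Train" the autoencoder: PCA gives the optimal linear encoder/decoder.
mean = bona_fide.mean(axis=0)
_, _, components = np.linalg.svd(bona_fide - mean, full_matrices=False)
W = components[:5]                      # encoder; decoder is W.T (tied weights)

def recon_error(x):
    z = (x - mean) @ W.T                          # encode
    return np.linalg.norm(z @ W + mean - x, axis=-1)  # decode, compare

# Threshold at the 99th percentile of bona fide training error.
tau = np.percentile(recon_error(bona_fide), 99)
detected = float(np.mean(recon_error(attacks) > tau))
print(f"attack detection rate: {detected:.2f}")
```

Because only bona fide data are needed for training, the detector can generalize to PAI species never seen during training, which is the main selling point of the one-class formulation.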
arXiv Detail & Related papers (2020-08-18T15:33:41Z) - Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z) - On the Resilience of Biometric Authentication Systems against Random Inputs [6.249167635929514]
We assess the security of machine learning based biometric authentication systems against an attacker who submits uniform random inputs.
In particular, for one reconstructed biometric system with an average FPR of 0.03, the success rate was as high as 0.78.
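The random-input attack described above is cheap to simulate: enroll some templates, feed uniform random vectors to the verifier, and count how often any enrolled identity accepts them. The "verifier" below is a hypothetical nearest-template cosine check with an invented threshold, not the systems evaluated in the paper.

```python
import numpy as np

# Sketch of the random-input attack: submit uniform random vectors to an
# enrolled verifier and measure how often they are (falsely) accepted.
# The cosine-threshold verifier and all parameters are illustrative.

rng = np.random.default_rng(2)
DIM, N_USERS = 32, 20
templates = rng.normal(size=(N_USERS, DIM))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)
THRESHOLD = 0.3                         # illustrative acceptance threshold

def accepts(enrolled, x):
    """True if x matches any enrolled template above the threshold."""
    x = x / (np.linalg.norm(x) + 1e-9)
    return bool(np.any(enrolled @ x > THRESHOLD))

# Attack: uniform random inputs, measured acceptance rate across trials.
trials = 2000
hits = sum(accepts(templates, rng.uniform(-1, 1, size=DIM)) for _ in range(trials))
print(f"random-input acceptance rate: {hits / trials:.3f}")
```

The paper's point is that this measured rate can far exceed the FPR estimated from genuine impostor samples, because random inputs need not resemble the impostor distribution used to set the threshold.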
arXiv Detail & Related papers (2020-01-13T04:20:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.