A Uniform Representation Learning Method for OCT-based Fingerprint
Presentation Attack Detection and Reconstruction
- URL: http://arxiv.org/abs/2209.12208v1
- Date: Sun, 25 Sep 2022 12:31:40 GMT
- Title: A Uniform Representation Learning Method for OCT-based Fingerprint
Presentation Attack Detection and Reconstruction
- Authors: Wentian Zhang, Haozhe Liu, Feng Liu, Raghavendra Ramachandra
- Abstract summary: Presentation Attack Detection (PAD) and subsurface fingerprint reconstruction based on depth information are treated as two independent branches.
This paper proposes a uniform representation model for OCT-based fingerprint PAD and subsurface fingerprint reconstruction.
- Score: 8.764696645299603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of optical coherence tomography (OCT) to fingerprint
imaging opens up new research potential for fingerprint recognition owing to its
ability to capture depth information of the skin layers. Robust and highly
secure Automated Fingerprint Recognition Systems (AFRSs) become possible if this
depth information can be fully utilized. However, in existing studies,
Presentation Attack Detection (PAD) and subsurface fingerprint reconstruction
based on depth information are treated as two independent branches, resulting in
high computational cost and complexity when building an AFRS. Thus, this paper
proposes a uniform representation model for OCT-based fingerprint PAD and
subsurface fingerprint reconstruction. First, we design a novel semantic
segmentation network, trained only on real finger slices of OCT-based
fingerprints, to extract multiple subsurface structures from those slices (also
known as B-scans). The latent codes derived from the network are used directly
to detect PAs effectively, since they contain abundant subsurface biological
information that is independent of PA materials and robust to unknown PAs.
Meanwhile, the segmented subsurface structures are used to reconstruct multiple
subsurface 2D fingerprints, so recognition can be achieved easily with existing
mature technologies for traditional 2D fingerprints. Extensive experiments are
carried out on a database we established, which is the largest public OCT-based
fingerprint database with 2449 volumes. In the PAD task, our method improves
accuracy by 0.33% over the state-of-the-art method. For reconstruction, our
method achieves the best performance, with 0.834 mIOU and 0.937 PA. Comparison
with the recognition performance on surface 2D fingerprints further
demonstrates the effectiveness of the proposed method for high-quality
subsurface fingerprint reconstruction.
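To make the shared-representation idea concrete, the following is a minimal
PyTorch-style sketch of the kind of pipeline the abstract describes: a
segmentation encoder-decoder whose latent code is reused as a PAD feature and
whose per-pixel output is projected into 2D subsurface maps. The layer sizes,
the number of subsurface classes, the prototype-distance PAD score, and the
en-face projection are illustrative assumptions, not the authors' actual
architecture.

```python
# Illustrative sketch only: a small encoder-decoder segmentation network whose
# latent code is reused for PAD and whose per-pixel output is projected into a
# 2D subsurface map. Layer sizes, class count and scoring rule are assumptions.
import torch
import torch.nn as nn


class SubsurfaceSegNet(nn.Module):
    def __init__(self, n_classes: int = 3, latent_dim: int = 128):
        super().__init__()
        # Encoder: compress a single-channel B-scan into a latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: recover per-pixel labels for the subsurface structures.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, bscan: torch.Tensor):
        latent = self.encoder(bscan)       # latent code, reused for PAD
        seg_logits = self.decoder(latent)  # per-pixel subsurface structure logits
        return latent, seg_logits


def pad_score(latent: torch.Tensor, bona_fide_prototype: torch.Tensor) -> torch.Tensor:
    """Anomaly-style PAD score: distance of the pooled latent code from a
    prototype estimated on bona fide B-scans (larger value => more likely a PA)."""
    pooled = latent.mean(dim=(2, 3))                         # (batch, latent_dim)
    return torch.norm(pooled - bona_fide_prototype, dim=1)


def reconstruct_2d(seg_logits_volume: torch.Tensor, structure: int = 1) -> torch.Tensor:
    """Toy en-face projection: for a stack of segmented B-scans shaped
    (n_slices, n_classes, depth, width), integrate the probability of one
    subsurface structure over depth to get a (n_slices, width) fingerprint map."""
    probs = seg_logits_volume.softmax(dim=1)[:, structure]   # (n_slices, depth, width)
    return probs.sum(dim=1)


if __name__ == "__main__":
    net = SubsurfaceSegNet()
    bscans = torch.randn(8, 1, 64, 64)                       # toy OCT volume of 8 B-scans
    latent, seg_logits = net(bscans)
    prototype = latent.mean(dim=(2, 3)).mean(0, keepdim=True)  # placeholder prototype
    print(pad_score(latent, prototype).shape)                # torch.Size([8])
    print(reconstruct_2d(seg_logits).shape)                  # torch.Size([8, 64])
```

In this sketch, both branches share all parameters up to the latent code, which
is the property the abstract emphasizes: a single learned representation serves
both PAD and subsurface reconstruction instead of two independent models.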
Related papers
- Latent fingerprint enhancement for accurate minutiae detection [8.996826918574463]
We propose a novel approach that uses generative adversarial networks (GANs) to redefine Latent Fingerprint Enhancement (LFE).
By directly optimising the minutiae information during the generation process, the model produces enhanced latent fingerprints that exhibit exceptional fidelity to ground-truth instances.
Our framework integrates minutiae locations and orientation fields, ensuring the preservation of both local and structural fingerprint features.
arXiv Detail & Related papers (2024-09-18T08:35:31Z)
- Finger-UNet: A U-Net based Multi-Task Architecture for Deep Fingerprint Enhancement [0.0]
Fingerprint enhancement plays a vital role in the early stages of the fingerprint recognition/verification pipeline.
We suggest intuitive modifications to U-Net to enhance low-quality fingerprints effectively.
We replace regular convolutions with depthwise separable convolutions, which significantly reduces the memory footprint of the model.
arXiv Detail & Related papers (2023-10-01T09:49:10Z)
- A Universal Latent Fingerprint Enhancer Using Transformers [47.87570819350573]
This study aims to develop a fast method, which we call ULPrint, to enhance various latent fingerprint types.
In closed-set identification accuracy experiments, the enhanced image was able to improve the performance of the MSU-AFIS from 61.56% to 75.19%.
arXiv Detail & Related papers (2023-05-31T23:01:11Z)
- Advancing 3D finger knuckle recognition via deep feature learning [51.871256510747465]
Contactless 3D finger knuckle patterns have emerged as an effective biometric identifier due to their discriminativeness, visibility from a distance, and convenience.
Recent research has developed a deep feature collaboration network which simultaneously incorporates intermediate features from deep neural networks with multiple scales.
This paper advances this approach by investigating the possibility of learning a discriminative feature vector with the least possible dimension for representing 3D finger knuckle images.
arXiv Detail & Related papers (2023-01-07T20:55:16Z)
- AFR-Net: Attention-Driven Fingerprint Recognition Network [47.87570819350573]
We improve initial studies on the use of vision transformers (ViT) for biometric recognition, including fingerprint recognition.
We propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low certainty situations.
This strategy can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance.
arXiv Detail & Related papers (2022-11-25T05:10:39Z)
- Synthetic Latent Fingerprint Generator [47.87570819350573]
Given a full fingerprint image (rolled or slap), we present CycleGAN models to generate multiple latent impressions of the same identity as the full print.
Our models can control the degree of distortion, noise, blurriness and occlusion in the generated latent print images.
Our approach for generating synthetic latent fingerprints can be used to improve the recognition performance of any latent matcher.
arXiv Detail & Related papers (2022-08-29T18:02:02Z)
- FIGO: Enhanced Fingerprint Identification Approach Using GAN and One Shot Learning Techniques [0.0]
We propose a fingerprint identification approach based on generative adversarial network and one-shot learning techniques.
First, we propose a Pix2Pix model to transform low-quality fingerprint images into higher-quality fingerprint images pixel by pixel, directly in the fingerprint enhancement tier.
Second, we construct a fully automated fingerprint feature extraction model using a one-shot learning approach to differentiate each fingerprint from the others in the fingerprint identification process.
arXiv Detail & Related papers (2022-08-11T02:45:42Z)
- SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z)
- Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks [6.700873164609009]
We propose a novel fingerprint synthesis and reconstruction framework based on the StyleGAN2 architecture.
We also derive a computational approach to modify the attributes of the generated fingerprint while preserving their identity.
The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction.
arXiv Detail & Related papers (2022-01-17T00:18:00Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)