Conditional Synthetic Live and Spoof Fingerprint Generation
- URL: http://arxiv.org/abs/2510.17035v1
- Date: Sun, 19 Oct 2025 22:44:21 GMT
- Title: Conditional Synthetic Live and Spoof Fingerprint Generation
- Authors: Syed Konain Abbas, Sandip Purnapatra, M. G. Sarwar Murshed, Conor Miller-Lynch, Lambert Igene, Soumyabrata Dey, Stephanie Schuckers, Faraz Hussain
- Abstract summary: Large fingerprint datasets are time-consuming and expensive to collect and require strict privacy measures. This paper presents a novel approach for generating synthetic fingerprint images (both spoof and live). We employ CycleGANs to translate these into realistic spoof fingerprints. These synthetic spoof fingerprints are crucial for developing robust spoof detection systems.
- Score: 0.6305266318624044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large fingerprint datasets, while important for training and evaluation, are time-consuming and expensive to collect and require strict privacy measures. Researchers are exploring the use of synthetic fingerprint data to address these issues. This paper presents a novel approach for generating synthetic fingerprint images (both spoof and live), addressing concerns related to privacy, cost, and accessibility in biometric data collection. Our approach utilizes conditional StyleGAN2-ADA and StyleGAN3 architectures to produce high-resolution synthetic live fingerprints, conditioned on specific finger identities (thumb through little finger). Additionally, we employ CycleGANs to translate these into realistic spoof fingerprints, simulating a variety of presentation attack materials (e.g., EcoFlex, Play-Doh). These synthetic spoof fingerprints are crucial for developing robust spoof detection systems. Through these generative models, we created two synthetic datasets (DB2 and DB3), each containing 1,500 fingerprint images of all ten fingers with multiple impressions per finger, and including corresponding spoofs in eight material types. The results indicate robust performance: our StyleGAN3 model achieves a Fréchet Inception Distance (FID) as low as 5, and the generated fingerprints achieve a True Accept Rate of 99.47% at a 0.01% False Accept Rate. The StyleGAN2-ADA model achieved a TAR of 98.67% at the same 0.01% FAR. We assess fingerprint quality using standard metrics (NFIQ2, MINDTCT), and notably, matching experiments show no significant evidence of identity leakage, confirming the strong privacy-preserving properties of our synthetic datasets.
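The matching figures in the abstract are operating points on a score distribution: a True Accept Rate (TAR) measured at the decision threshold where the impostor False Accept Rate (FAR) is held at 0.01%. As a minimal, illustrative sketch (not the authors' evaluation code), assuming genuine and impostor similarity scores have already been produced by a fingerprint matcher:

```python
def tar_at_far(genuine_scores, impostor_scores, target_far=1e-4):
    """TAR at the largest threshold whose impostor FAR does not exceed target_far.

    Scores are similarity scores: higher means a more confident match.
    target_far=1e-4 corresponds to the 0.01% FAR operating point quoted above.
    """
    impostor = sorted(impostor_scores, reverse=True)
    n_allowed = int(target_far * len(impostor))  # impostors allowed to pass
    threshold = impostor[n_allowed]              # accept only scores above this
    tar = sum(s > threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s > threshold for s in impostor) / len(impostor)
    return tar, far
```

A reported value like "TAR of 99.47% at 0.01% FAR" then reads as `tar_at_far(genuine, impostor, target_far=1e-4)` returning roughly `(0.9947, 1e-4)`; the FID figure is a separate distributional metric computed on feature embeddings, not on match scores.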
Related papers
- FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data [7.203557048672379]
We present FPGAN-Control, an identity-preserving image generation framework.
We introduce a novel appearance loss that encourages disentanglement between the fingerprint's identity and appearance properties.
We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity level, degree of appearance control, and low synthetic-to-real domain gap.
arXiv Detail & Related papers (2023-10-29T14:30:01Z)
- Synthetic Latent Fingerprint Generation Using Style Transfer [6.530917936319386]
We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints.
Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints.
arXiv Detail & Related papers (2023-09-27T15:47:00Z)
- A Universal Latent Fingerprint Enhancer Using Transformers [47.87570819350573]
This study aims to develop a fast method, which we call ULPrint, to enhance various latent fingerprint types.
In closed-set identification accuracy experiments, the enhanced image was able to improve the performance of the MSU-AFIS from 61.56% to 75.19%.
arXiv Detail & Related papers (2023-05-31T23:01:11Z)
- Synthetic Latent Fingerprint Generator [47.87570819350573]
Given a full fingerprint image (rolled or slap), we present CycleGAN models to generate multiple latent impressions of the same identity as the full print.
Our models can control the degree of distortion, noise, blurriness and occlusion in the generated latent print images.
Our approach for generating synthetic latent fingerprints can be used to improve the recognition performance of any latent matcher.
arXiv Detail & Related papers (2022-08-29T18:02:02Z)
- Hierarchical Perceptual Noise Injection for Social Media Fingerprint Privacy Protection [106.5308793283895]
Fingerprint leakage from social media raises a strong desire for anonymizing shared images.
To guard against fingerprint leakage, adversarial attack emerges as a solution by adding imperceptible perturbations on images.
We propose FingerSafe, a hierarchical perceptual protective noise injection framework to address the mentioned problems.
arXiv Detail & Related papers (2022-08-23T02:20:46Z)
- SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z)
- Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks [6.700873164609009]
We propose a novel fingerprint synthesis and reconstruction framework based on the StyleGAN2 architecture.
We also derive a computational approach to modify the attributes of the generated fingerprint while preserving their identity.
The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction.
arXiv Detail & Related papers (2022-01-17T00:18:00Z)
- PrintsGAN: Synthetic Fingerprint Generator [39.804969475699345]
PrintsGAN is a synthetic fingerprint generator capable of generating unique fingerprints along with multiple impressions for a given fingerprint.
We show the utility of PrintsGAN-generated fingerprints by training a deep network to extract a fixed-length embedding from a fingerprint.
arXiv Detail & Related papers (2022-01-10T22:25:10Z)
- A Comparative Study of Fingerprint Image-Quality Estimation Methods [54.84936551037727]
Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system.
In this work, we review existing approaches for fingerprint image-quality estimation.
We have also tested a selection of fingerprint image-quality estimation algorithms.
arXiv Detail & Related papers (2021-11-14T19:53:12Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, that allows researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.