Universal Fingerprint Generation: Controllable Diffusion Model with Multimodal Conditions
- URL: http://arxiv.org/abs/2404.13791v1
- Date: Sun, 21 Apr 2024 23:01:08 GMT
- Title: Universal Fingerprint Generation: Controllable Diffusion Model with Multimodal Conditions
- Authors: Steven A. Grosz, Anil K. Jain
- Abstract summary: GenPrint is a framework to produce fingerprint images of various types while maintaining identity.
GenPrint is not confined to replicating style characteristics from the training dataset alone.
Results demonstrate the benefits of GenPrint in terms of identity preservation, explainable control, and universality of generated images.
- Score: 25.738682467090335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The utilization of synthetic data for fingerprint recognition has garnered increased attention due to its potential to alleviate privacy concerns surrounding sensitive biometric data. However, current methods for generating fingerprints have limitations in creating impressions of the same finger with useful intra-class variations. To tackle this challenge, we present GenPrint, a framework to produce fingerprint images of various types while maintaining identity and offering humanly understandable control over different appearance factors such as fingerprint class, acquisition type, sensor device, and quality level. Unlike previous fingerprint generation approaches, GenPrint is not confined to replicating style characteristics from the training dataset alone: it enables the generation of novel styles from unseen devices without requiring additional fine-tuning. To accomplish these objectives, we developed GenPrint using latent diffusion models with multimodal conditions (text and image) for consistent generation of style and identity. Our experiments leverage a variety of publicly available datasets for training and evaluation. Results demonstrate the benefits of GenPrint in terms of identity preservation, explainable control, and universality of generated images. Importantly, the GenPrint-generated images yield comparable or even superior accuracy to models trained solely on real data and further enhance performance when augmenting the diversity of existing real fingerprint datasets.
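The abstract's "multimodal conditions (text and image)" idea can be illustrated with a minimal sketch of conditional noise prediction plus classifier-free guidance. Everything here is an illustrative stand-in, not the paper's architecture: the "denoiser" is a fixed random linear map instead of a U-Net, the embedding dimensions are arbitrary, and the guidance scale is a common default rather than a value from GenPrint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not from the paper).
LATENT, TXT, IMG = 16, 8, 8

# Stub "denoiser": a fixed linear map standing in for a conditional U-Net.
W = rng.normal(size=(LATENT + TXT + IMG, LATENT)) * 0.1

def predict_noise(z_t, text_emb, img_emb):
    """Toy epsilon-prediction conditioned on text and image embeddings."""
    return np.concatenate([z_t, text_emb, img_emb]) @ W

def guided_noise(z_t, text_emb, img_emb, scale=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction (conditions zeroed out) toward the conditional one."""
    e_uncond = predict_noise(z_t, np.zeros(TXT), np.zeros(IMG))
    e_cond = predict_noise(z_t, text_emb, img_emb)
    return e_uncond + scale * (e_cond - e_uncond)
```

At `scale=1.0` the guided prediction reduces to the plain conditional prediction; larger scales push sampling harder toward the text/image conditions, which is the usual trade-off between condition fidelity and sample diversity.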
Related papers
- Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques [0.44739156031315924]
We generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images to spoof images.
We assess the diversity and realism of the generated live fingerprint images mainly through the Fréchet Inception Distance (FID) and the False Acceptance Rate (FAR).
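The FID mentioned here is the Fréchet distance between Gaussians fitted to feature embeddings of real and generated images. A generic NumPy sketch of that computation (the feature sets would normally come from an Inception network; dimensions here are illustrative):

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """FID-style distance between Gaussians fitted to two feature sets.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
    """
    mu1, mu2 = feats_a.mean(axis=0), feats_b.mean(axis=0)
    s1 = np.cov(feats_a, rowvar=False)
    s2 = np.cov(feats_b, rowvar=False)
    # Tr(sqrtm(S1 @ S2)) equals Tr(sqrtm(S1^{1/2} @ S2 @ S1^{1/2})),
    # which is symmetric PSD, so a real eigendecomposition suffices.
    vals1, vecs1 = np.linalg.eigh(s1)
    s1_half = vecs1 @ np.diag(np.sqrt(np.clip(vals1, 0, None))) @ vecs1.T
    inner = s1_half @ s2 @ s1_half
    tr_covmean = np.sqrt(np.clip(np.linalg.eigvalsh(inner), 0, None)).sum()
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2 * tr_covmean)
```

Identical feature sets give a distance of zero, and a pure mean shift of the features contributes exactly its squared norm, which is a quick sanity check for any FID implementation.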
arXiv Detail & Related papers (2024-03-20T18:36:30Z) - DiffFinger: Advancing Synthetic Fingerprint Generation through Denoising Diffusion Probabilistic Models [0.0]
This study explores the generation of synthesized fingerprint images using Denoising Diffusion Probabilistic Models (DDPMs).
Our results reveal that DiffFinger not only competes with authentic training set data in quality but also provides a richer set of biometric data, reflecting true-to-life variability.
arXiv Detail & Related papers (2024-03-15T14:34:29Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although there have been a number of publicly available face forgery datasets, the forgery faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data [7.203557048672379]
We present FPGAN-Control, an identity preserving image generation framework.
We introduce a novel appearance loss that encourages disentanglement between the fingerprint's identity and appearance properties.
We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity level, degree of appearance control, and low synthetic-to-real domain gap.
arXiv Detail & Related papers (2023-10-29T14:30:01Z) - Synthetic Latent Fingerprint Generation Using Style Transfer [6.530917936319386]
We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints.
Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints.
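The image blending this entry mentions can be as simple as a pixelwise alpha composite of a style-transferred patch onto the identity-carrying fingerprint. A hedged sketch (the array shapes, value range, and alpha are assumptions, not the paper's method):

```python
import numpy as np

def alpha_blend(style_img, content_img, alpha=0.5):
    """Pixelwise alpha blend: alpha weights the style-transferred image,
    (1 - alpha) preserves the contact-based ridge (identity) structure.
    Inputs are float arrays of matching shape in [0, 1]."""
    return np.clip(alpha * style_img + (1.0 - alpha) * content_img, 0.0, 1.0)
```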
arXiv Detail & Related papers (2023-09-27T15:47:00Z) - RFDforFin: Robust Deep Forgery Detection for GAN-generated Fingerprint Images [45.73061833269094]
We propose the first deep forgery detection approach for fingerprint images, which combines unique ridge features of fingerprint and generation artifacts of the GAN-generated images.
Our proposed approach is effective and robust with low complexities.
arXiv Detail & Related papers (2023-08-18T04:05:18Z) - SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z) - Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks [6.700873164609009]
We propose a novel fingerprint synthesis and reconstruction framework based on the StyleGAN2 architecture.
We also derive a computational approach to modify the attributes of the generated fingerprint while preserving their identity.
The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction.
arXiv Detail & Related papers (2022-01-17T00:18:00Z) - Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deepfakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, that allows researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z) - Random Network Distillation as a Diversity Metric for Both Image and Text Generation [62.13444904851029]
We develop a new diversity metric that can be applied to data, both synthetic and natural, of any type.
We validate and deploy this metric on both images and text.
arXiv Detail & Related papers (2020-10-13T22:03:52Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.