FPGAN-Control: A Controllable Fingerprint Generator for Training with
Synthetic Data
- URL: http://arxiv.org/abs/2310.19024v1
- Date: Sun, 29 Oct 2023 14:30:01 GMT
- Title: FPGAN-Control: A Controllable Fingerprint Generator for Training with
Synthetic Data
- Authors: Alon Shoshan, Nadav Bhonker, Emanuel Ben Baruch, Ori Nizan, Igor
Kviatkovsky, Joshua Engelsma, Manoj Aggarwal, Gerard Medioni
- Abstract summary: We present FPGAN-Control, an identity-preserving image generation framework.
We introduce a novel appearance loss that encourages disentanglement between the fingerprint's identity and appearance properties.
We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity preservation level, degree of appearance control, and low synthetic-to-real domain gap.
- Score: 7.203557048672379
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Training fingerprint recognition models using synthetic data has recently
gained increased attention in the biometric community as it alleviates the
dependency on sensitive personal data. Existing approaches for fingerprint
generation are limited in their ability to generate diverse impressions of the
same finger, a key property for providing effective data for training
recognition models. To address this gap, we present FPGAN-Control, an
identity-preserving image generation framework that enables control over the
appearance of generated fingerprints (e.g., fingerprint type, acquisition
device, pressure level). We introduce a novel appearance loss
that encourages disentanglement between the fingerprint's identity and
appearance properties. In our experiments, we used the publicly available NIST
SD302 (N2N) dataset for training the FPGAN-Control model. We demonstrate the
merits of FPGAN-Control, both quantitatively and qualitatively, in terms of
identity preservation level, degree of appearance control, and low
synthetic-to-real domain gap. Finally, training recognition models using only
synthetic datasets generated by FPGAN-Control leads to recognition accuracies
that are on par with, or even surpass, those of models trained using real data.
To the best of our knowledge, this is the first work to demonstrate this.
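The abstract does not specify the form of the appearance loss, only that it encourages disentanglement between identity and appearance. As a purely illustrative sketch (not the paper's actual formulation), a common proxy for image appearance is a vector of low-order feature statistics, with the loss penalizing the distance between the statistics of a generated fingerprint and those of an appearance reference. The function names and the per-channel mean/std descriptor below are assumptions chosen for brevity.

```python
import numpy as np

def appearance_descriptor(img: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation of an H x W x C image.
    A hypothetical stand-in appearance statistic, not the paper's descriptor."""
    mean = img.mean(axis=(0, 1))
    std = img.std(axis=(0, 1))
    return np.concatenate([mean, std])

def appearance_loss(generated: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared distance between the appearance descriptors of a
    generated fingerprint and an appearance reference image."""
    diff = appearance_descriptor(generated) - appearance_descriptor(reference)
    return float(np.mean(diff ** 2))
```

In a training loop of this kind, the generator would receive one identity latent and one appearance latent; minimizing such a loss for a fixed appearance target while varying the identity latent is one way to push appearance information out of the identity code.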
Related papers
- Universal Fingerprint Generation: Controllable Diffusion Model with Multimodal Conditions [25.738682467090335]
GenPrint is a framework to produce fingerprint images of various types while maintaining identity.
GenPrint is not confined to replicating style characteristics from the training dataset alone.
Results demonstrate the benefits of GenPrint in terms of identity preservation, explainable control, and universality of generated images.
arXiv Detail & Related papers (2024-04-21T23:01:08Z) - Synthetic Latent Fingerprint Generation Using Style Transfer [6.530917936319386]
We propose a simple and effective approach using style transfer and image blending to synthesize realistic latent fingerprints.
Our evaluation criteria and experiments demonstrate that the generated synthetic latent fingerprints preserve the identity information from the input contact-based fingerprints.
arXiv Detail & Related papers (2023-09-27T15:47:00Z) - AFR-Net: Attention-Driven Fingerprint Recognition Network [47.87570819350573]
We improve initial studies on the use of vision transformers (ViT) for biometric recognition, including fingerprint recognition.
We propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low certainty situations.
This strategy can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance.
arXiv Detail & Related papers (2022-11-25T05:10:39Z) - Comparative analysis of segmentation and generative models for
fingerprint retrieval task [0.0]
Fingerprints deteriorate in quality when fingers are dirty, wet, or injured, or when sensors malfunction.
This paper proposes a deep learning approach to address these issues using generative (GAN) and segmentation models.
In our research, the U-Net model performed better than the GAN networks.
arXiv Detail & Related papers (2022-09-13T17:21:14Z) - SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z) - Synthesis and Reconstruction of Fingerprints using Generative
Adversarial Networks [6.700873164609009]
We propose a novel fingerprint synthesis and reconstruction framework based on the StyleGAN2 architecture.
We also derive a computational approach to modify the attributes of the generated fingerprint while preserving their identity.
The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction.
arXiv Detail & Related papers (2022-01-17T00:18:00Z) - Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z) - Responsible Disclosure of Generative Models Using Scalable
Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision
Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake
Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.