iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images
- URL: http://arxiv.org/abs/2305.12596v2
- Date: Wed, 30 Aug 2023 03:55:54 GMT
- Title: iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images
- Authors: Shivangi Yadav and Arun Ross
- Abstract summary: iWarpGAN generates iris images with both inter- and intra-class variations.
The utility of the synthetically generated images is demonstrated by improving the performance of deep learning based iris matchers.
- Score: 13.60510525958336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have shown success in approximating
complex distributions for synthetic image generation. However, current
GAN-based methods for generating biometric images, such as iris, have certain
limitations: (a) the synthetic images often closely resemble images in the
training dataset; (b) the generated images lack diversity in terms of the
number of unique identities represented in them; and (c) it is difficult to
generate multiple images pertaining to the same identity. To overcome these
issues, we propose iWarpGAN that disentangles identity and style in the context
of the iris modality by using two transformation pathways: Identity
Transformation Pathway to generate identities distinct from those in the
training set, and
Style Transformation Pathway to extract the style code from a reference image
and output an iris image using this style. By concatenating the transformed
identity code and reference style code, iWarpGAN generates iris images with
both inter- and intra-class variations. The efficacy of the proposed method in
generating such iris DeepFakes is evaluated both qualitatively and
quantitatively using ISO/IEC 29794-6 Standard Quality Metrics and the VeriEye
iris matcher. Further, the utility of the synthetically generated images is
demonstrated by the improved performance of deep learning based iris matchers
trained on real data augmented with synthetic data.
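The core mechanism described above, concatenating a transformed identity code with a style code extracted from a reference image to form the generator's latent input, can be sketched as follows. All names and dimensions here are illustrative assumptions; in iWarpGAN both pathways are learned networks, not the fixed random projections used in this toy sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two pathways: in iWarpGAN these are
# learned networks; here they are fixed random projections for illustration.
W_id = rng.standard_normal((64, 32))     # identity-transformation pathway
W_style = rng.standard_normal((64, 32))  # style-transformation pathway

def identity_code(iris_embedding):
    """Transform an identity embedding into a new identity code."""
    return np.tanh(iris_embedding @ W_id)

def style_code(reference_embedding):
    """Extract a style code from a reference image embedding."""
    return np.tanh(reference_embedding @ W_style)

def generator_input(id_emb, ref_emb):
    """Concatenate the transformed identity code and the reference style
    code, forming the latent vector that would be fed to the generator."""
    return np.concatenate([identity_code(id_emb), style_code(ref_emb)])

z = generator_input(rng.standard_normal(64), rng.standard_normal(64))
print(z.shape)  # (64,)
```

Because identity and style enter through separate codes, holding the identity code fixed while varying the style code yields intra-class variation, and varying the identity code yields new identities (inter-class variation).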
Related papers
- Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis [7.099258248662009]
Text-to-image (T2I) models have significantly advanced the development of artificial intelligence.
However, existing T2I-based methods often struggle to accurately reproduce the appearance of individuals from a reference image.
We leverage the pre-trained UNet from Stable Diffusion to incorporate the target face image directly into the generation process.
arXiv Detail & Related papers (2024-09-27T19:31:04Z)
- EyePreserve: Identity-Preserving Iris Synthesis [8.973296574093506]
This paper presents the first method of fully data-driven, identity-preserving, pupil size-varying synthesis of iris images.
Two immediate applications of the proposed approach are: (a) synthesis of, or enhancement of the existing biometric datasets for iris recognition, and (b) helping forensic human experts in examining iris image pairs with significant differences in pupil dilation.
arXiv Detail & Related papers (2023-12-19T10:29:29Z)
- T-Person-GAN: Text-to-Person Image Generation with Identity-Consistency and Manifold Mix-Up [16.165889084870116]
We present an end-to-end approach to generate high-resolution person images conditioned on texts only.
We develop an effective generative model to produce person images with two novel mechanisms.
arXiv Detail & Related papers (2022-08-18T07:41:02Z)
- DeformIrisNet: An Identity-Preserving Model of Iris Texture Deformation [4.142375560633827]
In dominant approaches to iris recognition, the size of a ring-shaped iris region is linearly scaled to a canonical rectangle.
We propose a novel deep autoencoder-based model that can effectively learn complex movements of iris texture features directly from the data.
arXiv Detail & Related papers (2022-07-18T23:23:23Z)
- Iris Recognition Based on SIFT Features [63.07521951102555]
We use the Scale Invariant Feature Transform (SIFT) for recognition using iris images.
We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator.
We also show that the SIFT approach complements a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets.
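The matching stage described here can be illustrated with Lowe's ratio test over precomputed descriptors. This is only the matching step, on synthetic data; the actual SIFT keypoints and 128-d descriptors would come from an off-the-shelf implementation, and the 0.75 ratio threshold is a conventional choice, not one taken from this paper.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors a -> b with Lowe's ratio test: accept a match
    only when the nearest neighbour is clearly closer than the
    second-nearest, which suppresses ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 128))                     # 5 query descriptors
b = np.vstack([a + 0.01 * rng.standard_normal((5, 128)),
               rng.standard_normal((20, 128))])       # noisy copies + clutter
m = ratio_test_matches(a, b)
print(m)  # each query matches its noisy copy: [(0, 0), (1, 1), ..., (4, 4)]
```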
arXiv Detail & Related papers (2021-10-30T04:55:33Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
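A minimal sketch of this test-time idea, assuming only a generic classifier and a set of generated views of one image, is prediction averaging. The classifier and views below are toy stand-ins, not the paper's StyleGAN2 setup.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(classifier, views):
    """Average class probabilities over an image's generated 'views'
    (e.g. GAN-produced variations in color or pose), then take argmax."""
    probs = np.mean([softmax(classifier(v)) for v in views], axis=0)
    return int(np.argmax(probs))

# Toy stand-in: a linear 'classifier' mapping a 4-d input to 3 class logits.
W = np.array([[1., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
classifier = lambda x: x @ W

rng = np.random.default_rng(2)
views = [np.ones(4) + 0.1 * rng.standard_normal(4) for _ in range(8)]
print(ensemble_predict(classifier, views))  # 0 (class 0 logit ~2 vs ~1)
```

Averaging over views smooths out per-view noise, which is the mechanism by which generative augmentations can help downstream classification.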
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, in which IACycleGAN iteratively synthesizes better images.
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- DVG-Face: Dual Variational Generation for Heterogeneous Face Recognition [85.94331736287765]
We formulate HFR as a dual generation problem, and tackle it via a novel Dual Variational Generation (DVG-Face) framework.
We integrate abundant identity information of large-scale visible data into the joint distribution.
Massive new diverse paired heterogeneous images with the same identity can be generated from noises.
arXiv Detail & Related papers (2020-09-20T09:48:24Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.