Gradient-Guided Exploration of Generative Model's Latent Space for Controlled Iris Image Augmentations
- URL: http://arxiv.org/abs/2511.09749v1
- Date: Fri, 14 Nov 2025 01:07:41 GMT
- Title: Gradient-Guided Exploration of Generative Model's Latent Space for Controlled Iris Image Augmentations
- Authors: Mahsa Mitcheff, Siamul Karim Khan, Adam Czajka
- Abstract summary: We introduce a new iris image augmentation strategy by traversing a generative model's latent space toward latent codes. The proposed approach can be easily extended to manipulate any attribute for which a differentiable loss term can be formulated. We can utilize GAN inversion to project any given iris image into the latent space and obtain its corresponding latent code.
- Score: 3.6245424131171813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing reliable iris recognition and presentation attack detection methods requires diverse datasets that capture realistic variations in iris features and a wide spectrum of anomalies. Because of the rich texture of iris images, which spans a wide range of spatial frequencies, synthesizing same-identity iris images while controlling specific attributes remains challenging. In this work, we introduce a new iris image augmentation strategy by traversing a generative model's latent space toward latent codes that represent same-identity samples but with some desired iris image properties manipulated. The latent space traversal is guided by a gradient of specific geometrical, textural, or quality-related iris image features (e.g., sharpness, pupil size, iris size, or pupil-to-iris ratio) and preserves the identity represented by the image being manipulated. The proposed approach can be easily extended to manipulate any attribute for which a differentiable loss term can be formulated. Additionally, our approach can use either images randomly generated by a pre-trained GAN model or real-world iris images. We can utilize GAN inversion to project any given iris image into the latent space and obtain its corresponding latent code.
Related papers
- On the Feasibility of Creating Iris Periocular Morphed Images [9.021226651004055]
This work proposes an end-to-end framework to produce iris morphs at the image level.
It considers different stages such as pair subject selection, segmentation, morph creation, and a new iris recognition system.
The results show that this approach obtained very realistic images that can confuse conventional iris recognition systems.
arXiv Detail & Related papers (2024-08-24T06:48:46Z) - Synthesizing Iris Images using Generative Adversarial Networks: Survey and Comparative Analysis [11.5164036021499]
We present a review of state-of-the-art GAN-based synthetic iris image generation techniques.
We first survey the various methods that have been used for synthetic iris generation and specifically consider generators based on StyleGAN, RaSGAN, CIT-GAN, iWarpGAN, StarGAN, etc.
arXiv Detail & Related papers (2024-04-26T01:45:58Z) - EyePreserve: Identity-Preserving Iris Synthesis [8.468443367440052]
This paper presents the first method of fully data-driven, identity-preserving, pupil size-varying synthesis of iris images. It is capable of synthesizing images of irises with different pupil sizes representing non-existing identities, as well as non-linearly deforming the texture of iris images of existing subjects. Iris recognition experiments suggest that the proposed deformation model both preserves the identity when changing the pupil size, and offers better similarity between same-identity iris samples with significant differences in pupil size.
arXiv Detail & Related papers (2023-12-19T10:29:29Z) - iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images [13.60510525958336]
iWarpGAN generates iris images with both inter- and intra-class variations.
The utility of the synthetically generated images is demonstrated by improving the performance of deep learning based iris matchers.
arXiv Detail & Related papers (2023-05-21T23:10:14Z) - Super-Resolution and Image Re-projection for Iris Recognition [67.42500312968455]
Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images.
In this work we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment.
Results show that CNNs and image re-projection can improve the results, especially the accuracy of recognition systems.
arXiv Detail & Related papers (2022-10-20T09:46:23Z) - DeformIrisNet: An Identity-Preserving Model of Iris Texture Deformation [4.142375560633827]
In dominant approaches to iris recognition, the size of a ring-shaped iris region is linearly scaled to a canonical rectangle.
We propose a novel deep autoencoder-based model that can effectively learn complex movements of iris texture features directly from the data.
arXiv Detail & Related papers (2022-07-18T23:23:23Z) - Iris Recognition Based on SIFT Features [63.07521951102555]
We use the Scale Invariant Feature Transform (SIFT) for recognition using iris images.
We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator.
We also show the complementarity between the SIFT approach and a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets.
arXiv Detail & Related papers (2021-10-30T04:55:33Z) - Toward Accurate and Reliable Iris Segmentation Using Uncertainty Learning [96.72850130126294]
We propose an Iris U-transformer (IrisUsformer) for accurate and reliable iris segmentation.
For better accuracy, we carefully design IrisUsformer by adopting position-sensitive operations and re-packaging the transformer block.
We show that IrisUsformer achieves better segmentation accuracy using 35% of the MACs of the SOTA IrisParseNet.
arXiv Detail & Related papers (2021-10-20T01:37:19Z) - DVG-Face: Dual Variational Generation for Heterogeneous Face Recognition [85.94331736287765]
We formulate HFR as a dual generation problem, and tackle it via a novel Dual Variational Generation (DVG-Face) framework.
We integrate abundant identity information of large-scale visible data into the joint distribution.
Massive new diverse paired heterogeneous images with the same identity can be generated from noise.
arXiv Detail & Related papers (2020-09-20T09:48:24Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.