Generative Iris Prior Embedded Transformer for Iris Restoration
- URL: http://arxiv.org/abs/2407.00261v1
- Date: Fri, 28 Jun 2024 23:20:57 GMT
- Title: Generative Iris Prior Embedded Transformer for Iris Restoration
- Authors: Yubo Huang, Jia Wang, Peipei Li, Liuyu Xiang, Peigang Li, Zhaofeng He
- Abstract summary: We propose a generative iris prior embedded Transformer model (Gformer).
First, we tame Transformer blocks to model long-range dependencies in target images.
Second, we pretrain an iris generative adversarial network (GAN) to obtain a rich iris prior, and incorporate it into the iris restoration process with our iris feature modulator.
- Score: 6.616142716765673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Iris restoration from complexly degraded iris images, aiming to improve iris recognition performance, is a challenging problem. Due to the complex degradation, directly training a convolutional neural network (CNN) without a prior cannot yield satisfactory results. In this work, we propose a generative iris prior embedded Transformer model (Gformer), in which we build a hierarchical encoder-decoder network employing Transformer blocks and a generative iris prior. First, we tame Transformer blocks to model long-range dependencies in target images. Second, we pretrain an iris generative adversarial network (GAN) to obtain a rich iris prior, and incorporate it into the iris restoration process with our iris feature modulator. Our experiments demonstrate that the proposed Gformer outperforms state-of-the-art methods. Moreover, iris recognition performance improves significantly after applying Gformer.
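The abstract leaves the modulator's internals unspecified; the PyTorch sketch below shows one plausible SFT-style reading of an "iris feature modulator", in which decoder features are scaled and shifted by maps predicted from the pretrained GAN's prior features. The class name, layer layout, and channel sizes are illustrative assumptions, not the authors' exact design.

```python
# A minimal sketch of a GAN-prior feature modulator (SFT-style); names
# and sizes are assumptions, not Gformer's published architecture.
import torch
import torch.nn as nn

class IrisFeatureModulator(nn.Module):
    """Modulates decoder features with spatial scale/shift maps
    predicted from pretrained iris-GAN prior features."""
    def __init__(self, feat_ch: int, prior_ch: int):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Conv2d(prior_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )
        self.shift = nn.Sequential(
            nn.Conv2d(prior_ch, feat_ch, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # feat:  decoder feature map    (B, feat_ch, H, W)
        # prior: iris-GAN prior feature (B, prior_ch, H, W), same spatial size
        return feat * (1 + self.scale(prior)) + self.shift(prior)

# usage: modulate a 64-channel decoder feature with a 512-channel prior feature
mod = IrisFeatureModulator(feat_ch=64, prior_ch=512)
out = mod(torch.randn(1, 64, 32, 32), torch.randn(1, 512, 32, 32))
```

In a hierarchical encoder-decoder such as the one described, a modulator of this kind would typically sit at each decoder scale, paired with the prior feature of matching resolution.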
Related papers
- A Prior Embedding-Driven Architecture for Long Distance Blind Iris Recognition [5.482786561272011]
We propose a prior embedding-driven architecture for long distance blind iris recognition.
We first propose a blind iris image restoration network called Iris-PPRGAN.
To effectively restore the texture of the blind iris, Iris-PPRGAN includes a Generative Adversarial Network (GAN) used as a prior decoder and a DNN used as the encoder.
arXiv Detail & Related papers (2024-08-01T00:40:17Z) - Training Transformer Models by Wavelet Losses Improves Quantitative and Visual Performance in Single Image Super-Resolution [6.367865391518726]
Transformer-based models have achieved remarkable results in low-level vision tasks, including image super-resolution (SR).
To activate more input pixels globally, hybrid attention models have been proposed.
We employ wavelet losses to train Transformer models to improve quantitative and subjective performance.
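As a rough illustration of training with a wavelet loss, the sketch below computes a one-level Haar decomposition of prediction and target and penalizes their subband difference with L1; the cited paper's exact wavelet, decomposition depth, and weighting may differ.

```python
# A hedged sketch of a wavelet-domain training loss using a hand-rolled,
# differentiable one-level Haar transform (assumes even H and W).
import torch
import torch.nn.functional as F

def haar_dwt(x: torch.Tensor) -> torch.Tensor:
    """One-level Haar DWT of a (B, C, H, W) tensor; returns the four
    subbands (LL, LH, HL, HH) stacked along the channel axis."""
    b, c, h, w = x.shape
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    k = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x)  # (4, 1, 2, 2)
    x = x.reshape(b * c, 1, h, w)
    return F.conv2d(x, k, stride=2).reshape(b, c * 4, h // 2, w // 2)

def wavelet_l1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between the Haar subbands of prediction and target."""
    return F.l1_loss(haar_dwt(pred), haar_dwt(target))
```

In practice, such a term is usually added to a pixel-domain loss with a scalar weight rather than used alone.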
arXiv Detail & Related papers (2024-04-17T11:25:19Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to keep the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
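A condensed sketch of the two-stage idea, assuming a pretrained generator `G` and a domain-guided encoder `E` are given: the encoder initializes the latent code, and optimization then refines it with a regularizer that keeps the re-encoded reconstruction close to the code. Loss weights and step counts are illustrative.

```python
# Encoder-initialized, domain-regularized GAN inversion (sketch).
import torch

def invert(G, E, image, steps: int = 200, lr: float = 0.01, lam: float = 2.0):
    z = E(image).detach().clone().requires_grad_(True)  # encoder initialization
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        recon = G(z)
        rec_loss = torch.nn.functional.mse_loss(recon, image)
        # domain regularizer: the re-encoded reconstruction should stay
        # near z, keeping the code in the generator's native latent space
        dom_loss = torch.nn.functional.mse_loss(E(recon), z)
        loss = rec_loss + lam * dom_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```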
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images [13.60510525958336]
iWarpGAN generates iris images with both inter- and intra-class variations.
The utility of the synthetically generated images is demonstrated by improving the performance of deep learning based iris matchers.
arXiv Detail & Related papers (2023-05-21T23:10:14Z) - Artificial Pupil Dilation for Data Augmentation in Iris Semantic Segmentation [0.0]
Modern approaches to iris recognition utilize deep learning to segment the valid portion of the iris from the rest of the eye.
This paper aims to improve the accuracy of iris semantic segmentation systems by introducing a novel data augmentation technique.
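One way to realize such an augmentation (the paper's exact warp may differ) is a piecewise-linear radial remap around the pupil center that enlarges the pupil while pinning the outer iris boundary in place; the circle parameters below are assumed to come from an upstream iris localization step.

```python
# Synthetic pupil dilation via radial remapping (illustrative sketch).
import cv2
import numpy as np

def dilate_pupil(img, cx, cy, r_pupil, r_iris, r_pupil_new):
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    theta = np.arctan2(dy, dx)
    # piecewise-linear radial map: [0, r_new] <- [0, r_old],
    # [r_new, r_iris] <- [r_old, r_iris]; identity outside the iris
    r_src = np.where(
        r <= r_pupil_new,
        r * (r_pupil / r_pupil_new),
        np.where(
            r <= r_iris,
            r_pupil + (r - r_pupil_new) * (r_iris - r_pupil) / (r_iris - r_pupil_new),
            r,
        ),
    )
    map_x = (cx + r_src * np.cos(theta)).astype(np.float32)
    map_y = (cy + r_src * np.sin(theta)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```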
arXiv Detail & Related papers (2022-12-24T13:31:56Z) - Super-Resolution and Image Re-projection for Iris Recognition [67.42500312968455]
Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images.
In this work we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment.
Results show that CNNs and image re-projection can improve performance, especially the accuracy of recognition systems.
arXiv Detail & Related papers (2022-10-20T09:46:23Z) - Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion [69.53542497693086]
We evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches.
Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance.
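The toy sketch below captures the eigen-patch idea: learn a PCA basis over flattened low-resolution training patches, then map a test patch's projection coefficients to a high-resolution estimate. The coupled least-squares mapping and all shapes are simplifying assumptions, not the paper's exact formulation.

```python
# Eigen-patch super-resolution, reduced to a PCA-plus-regression sketch.
import numpy as np

def fit_eigen_patches(lr_patches, hr_patches, k=32):
    # lr_patches: (N, d_lr), hr_patches: (N, d_hr); rows are flattened patches
    mu_lr, mu_hr = lr_patches.mean(0), hr_patches.mean(0)
    # PCA basis of the LR patches (top-k right singular vectors)
    _, _, vt = np.linalg.svd(lr_patches - mu_lr, full_matrices=False)
    basis_lr = vt[:k]                            # (k, d_lr) eigen-patches
    coeffs = (lr_patches - mu_lr) @ basis_lr.T   # (N, k)
    # least-squares map from LR coefficients to HR residual patches
    hr_map, *_ = np.linalg.lstsq(coeffs, hr_patches - mu_hr, rcond=None)
    return mu_lr, mu_hr, basis_lr, hr_map        # hr_map: (k, d_hr)

def reconstruct(lr_patch, mu_lr, mu_hr, basis_lr, hr_map):
    c = (lr_patch - mu_lr) @ basis_lr.T          # project onto eigen-patches
    return mu_hr + c @ hr_map                    # hallucinated HR patch
```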
arXiv Detail & Related papers (2022-10-18T11:25:19Z) - Restormer: Efficient Transformer for High-Resolution Image Restoration [118.9617735769827]
Convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data.
Transformers have shown significant performance gains on natural language and high-level vision tasks.
Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks.
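Much of Restormer's efficiency comes from computing self-attention across channels rather than pixels, so cost grows linearly with image size. The minimal sketch below shows that "transposed" attention idea with a single head, omitting the depthwise convolutions and multi-head split of the actual block.

```python
# Single-head channel-wise ("transposed") self-attention, a simplified
# sketch of the idea behind Restormer's attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.qkv = nn.Conv2d(ch, ch * 3, kernel_size=1)
        self.proj = nn.Conv2d(ch, ch, kernel_size=1)
        self.temperature = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).reshape(b, 3, c, h * w).unbind(1)  # each (B, C, HW)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (B, C, C): channel map
        out = attn.softmax(dim=-1) @ v                       # (B, C, HW)
        return self.proj(out.reshape(b, c, h, w))
```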
arXiv Detail & Related papers (2021-11-18T18:59:10Z) - The Nuts and Bolts of Adopting Transformer in GANs [124.30856952272913]
We investigate the properties of Transformer in the generative adversarial network (GAN) framework for high-fidelity image synthesis.
Our study leads to a new alternative design of Transformers in GANs: a convolutional neural network (CNN)-free generator termed STrans-G.
arXiv Detail & Related papers (2021-10-25T17:01:29Z) - Toward Accurate and Reliable Iris Segmentation Using Uncertainty Learning [96.72850130126294]
We propose an Iris U-transformer (IrisUsformer) for accurate and reliable iris segmentation.
For better accuracy, we elaborately design IrisUsformer by adopting position-sensitive operations and re-packaged Transformer blocks.
We show that IrisUsformer achieves better segmentation accuracy while using 35% of the MACs of the SOTA IrisParseNet.
arXiv Detail & Related papers (2021-10-20T01:37:19Z)