Optimizer Sensitivity in Vision Transformer-based Iris Recognition: AdamW vs. SGD vs. RMSprop
- URL: http://arxiv.org/abs/2511.22994v1
- Date: Fri, 28 Nov 2025 08:56:52 GMT
- Title: Optimizer Sensitivity in Vision Transformer-based Iris Recognition: AdamW vs. SGD vs. RMSprop
- Authors: Moh Imam Faiz, Aviv Yuniar Rahman, Rangga Pahlevi Putra
- Abstract summary: Iris recognition offers high reliability due to its distinctive and stable texture patterns. Recent progress in deep learning, especially Vision Transformers (ViT), has improved visual recognition performance. This work evaluates how different optimizers influence the accuracy and stability of ViT for iris recognition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The security of biometric authentication is increasingly critical as digital identity systems expand. Iris recognition offers high reliability due to its distinctive and stable texture patterns. Recent progress in deep learning, especially Vision Transformers (ViT), has improved visual recognition performance. Yet, the effect of optimizer choice on ViT-based biometric systems remains understudied. This work evaluates how different optimizers influence the accuracy and stability of ViT for iris recognition, providing insights to enhance the robustness of biometric identification models.
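The three optimizers named in the title differ in how they scale each gradient step. As an illustrative sketch only (the paper trains a ViT on iris data; the function names and hyperparameters below are our own, not taken from the paper's code), the update rules can be compared on a toy 1-D quadratic loss f(w) = (w - 3)^2, whose gradient is 2(w - 3):

```python
import math

def grad(w):
    """Gradient of the toy loss f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def run_sgd(steps=400, lr=0.1):
    # Plain SGD: a fixed step along the negative gradient.
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def run_rmsprop(steps=400, lr=0.05, alpha=0.99, eps=1e-8):
    # RMSprop: divide each step by a running RMS of past gradients.
    w, v = 0.0, 0.0
    for _ in range(steps):
        g = grad(w)
        v = alpha * v + (1 - alpha) * g * g   # running average of squared grads
        w -= lr * g / (math.sqrt(v) + eps)
    return w

def run_adamw(steps=400, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    # AdamW: bias-corrected first/second moments, with weight decay
    # applied directly to the weights (decoupled from the gradient term).
    w, m, v = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)             # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)
    return w

final = {"sgd": run_sgd(), "rmsprop": run_rmsprop(), "adamw": run_adamw()}
```

All three runs converge toward the minimizer w = 3, but along different trajectories, which is the kind of optimizer-dependent behavior the paper studies at ViT scale.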
Related papers
- Deep Learning Models for Robust Facial Liveness Detection [56.08694048252482]
This study introduces a robust solution through novel deep learning models addressing the deficiencies in contemporary anti-spoofing techniques. By innovatively integrating texture analysis and reflective properties associated with genuine human traits, our models distinguish authentic presence from replicas with remarkable precision.
arXiv Detail & Related papers (2025-08-12T17:19:20Z) - Iris Style Transfer: Enhancing Iris Recognition with Style Features and Privacy Preservation through Neural Style Transfer [44.44776028287441]
Iris texture is widely regarded as a gold standard biometric modality for authentication and identification. We propose using neural style transfer to obfuscate the identifiable iris style features. This work opens new avenues for iris-oriented, secure, and privacy-aware biometric systems.
arXiv Detail & Related papers (2025-03-06T18:55:21Z) - Impact of Iris Pigmentation on Performance Bias in Visible Iris Verification Systems: A Comparative Study [6.639785884921617]
We investigate the impact of iris pigmentation on the efficacy of biometric recognition systems, focusing on a comparative analysis of blue and dark irises.
Our results indicate that iris recognition systems generally exhibit higher accuracy for blue irises compared to dark irises.
Our analysis identifies inherent biases in recognition performance related to iris color and cross-device variability.
arXiv Detail & Related papers (2024-11-13T10:15:27Z) - ChatGPT Meets Iris Biometrics [10.902536447343465]
This study utilizes the advanced capabilities of the GPT-4 multimodal Large Language Model (LLM) to explore its potential in iris recognition.
We investigate how well AI tools like ChatGPT can understand and analyze iris images.
Our findings suggest a promising path for future research and the development of more adaptable, efficient, robust and interactive biometric security solutions.
arXiv Detail & Related papers (2024-08-09T05:13:07Z) - ChangeViT: Unleashing Plain Vision Transformers for Change Detection [3.582733645632794]
ChangeViT is a framework that adopts a plain ViT backbone to enhance the detection of large-scale changes.
The framework achieves state-of-the-art performance on three popular high-resolution datasets.
arXiv Detail & Related papers (2024-06-18T17:59:08Z) - Embedding Non-Distortive Cancelable Face Template Generation [22.80706131626207]
We introduce an innovative image distortion technique that makes facial images unrecognizable to the eye but still identifiable by any custom embedding neural network model.
We test the reliability of biometric recognition networks by determining the maximum image distortion that does not change the predicted identity.
arXiv Detail & Related papers (2024-02-04T15:39:18Z) - Generalized Face Forgery Detection via Adaptive Learning for Pre-trained Vision Transformer [54.32283739486781]
We present a Forgery-aware Adaptive Vision Transformer (FA-ViT) under the adaptive learning paradigm.
FA-ViT achieves 93.83% and 78.32% AUC scores on Celeb-DF and DFDC datasets in the cross-dataset evaluation.
arXiv Detail & Related papers (2023-09-20T06:51:11Z) - Multimodal Adaptive Fusion of Face and Gait Features using Keyless Attention based Deep Neural Networks for Human Identification [67.64124512185087]
Soft biometrics such as gait are widely used with face in surveillance tasks like person recognition and re-identification.
We propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks.
arXiv Detail & Related papers (2023-03-24T05:28:35Z) - Iris Recognition Based on SIFT Features [63.07521951102555]
We use the Scale Invariant Feature Transform (SIFT) for recognition using iris images.
We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator.
We also show the complement between the SIFT approach and a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets.
arXiv Detail & Related papers (2021-10-30T04:55:33Z) - Toward Accurate and Reliable Iris Segmentation Using Uncertainty Learning [96.72850130126294]
We propose an Iris U-transformer (IrisUsformer) for accurate and reliable iris segmentation.
For better accuracy, we elaborately design IrisUsformer by adopting position-sensitive operations and re-packaged transformer blocks.
We show that IrisUsformer achieves better segmentation accuracy using 35% of the MACs of the SOTA IrisParseNet.
arXiv Detail & Related papers (2021-10-20T01:37:19Z) - Vision Transformers are Robust Learners [65.91359312429147]
We study the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples.
We present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners.
arXiv Detail & Related papers (2021-05-17T02:39:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.