Texture Aware Autoencoder Pre-training And Pairwise Learning Refinement For Improved Iris Recognition
- URL: http://arxiv.org/abs/2202.07499v1
- Date: Tue, 15 Feb 2022 15:12:31 GMT
- Title: Texture Aware Autoencoder Pre-training And Pairwise Learning Refinement For Improved Iris Recognition
- Authors: Manashi Chakraborty, Aritri Chakraborty, Prabir Kumar Biswas, Pabitra Mitra
- Abstract summary: This paper presents an end-to-end trainable iris recognition system for datasets with limited training data.
We build upon our previous stagewise learning framework with certain key optimization and architectural innovations.
We validate our model across three publicly available iris datasets and the proposed model consistently outperforms both traditional and deep learning baselines.
- Score: 16.383084641568693
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a texture aware, end-to-end trainable iris recognition system, specifically designed for datasets with limited training data, such as iris datasets. We build upon our previous stagewise learning framework with certain key optimization and architectural innovations. First, we pretrain a Stage-1 encoder network with unsupervised autoencoder learning, optimized with an additional data relation loss on top of the usual reconstruction loss. The data relation loss enables learning a better texture representation, which is pivotal for a texture-rich dataset such as iris. The robustness of the Stage-1 feature representation is further enhanced with an auxiliary denoising task. Such pre-training proves beneficial for effectively training deep networks on data-constrained iris datasets. Next, in the Stage-2 supervised refinement, we design a pairwise learning architecture for an end-to-end trainable iris recognition system. The pairwise learning includes the task of iris matching inside the training pipeline itself and yields a significant improvement in recognition performance compared to the usual offline matching. We validate our model across three publicly available iris datasets, and the proposed model consistently outperforms both traditional and deep learning baselines in both Within-Dataset and Cross-Dataset configurations.
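The abstract does not spell out the exact form of the data relation loss. As a rough illustration, the Stage-1 objective can be pictured as a denoising reconstruction term plus a term that asks pairwise relations among latent codes to mirror those among the inputs. Everything below (function names, the concrete relation term, the weighting) is an assumption for illustration, not the paper's implementation.

```python
# Minimal sketch of a Stage-1 objective: denoising reconstruction plus an
# assumed pairwise data relation term (hypothetical form).
import torch
import torch.nn.functional as F

def stage1_loss(encoder, decoder, x, noise_std=0.1, relation_weight=1.0):
    x_noisy = x + noise_std * torch.randn_like(x)  # auxiliary denoising task
    z = encoder(x_noisy)                           # latent texture representation
    x_hat = decoder(z)

    # Usual reconstruction loss: denoise back to the clean input.
    recon = F.mse_loss(x_hat, x)

    # Assumed data relation term: pairwise distances between latent codes
    # should mirror pairwise distances between the corresponding inputs.
    d_in = torch.cdist(x.flatten(1), x.flatten(1))
    d_z = torch.cdist(z.flatten(1), z.flatten(1))
    relation = F.mse_loss(d_z, d_in)

    return recon + relation_weight * relation
```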
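Similarly, the Stage-2 pairwise refinement can be read as a Siamese-style head that scores iris pairs inside the training loop, so matching is learned end-to-end rather than run offline after feature extraction. The head below is a hypothetical stand-in, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PairwiseMatcher(nn.Module):
    """Hypothetical Stage-2 head: score whether two iris images match."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder  # would be initialized from Stage-1 weights
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1))  # match / non-match logit

    def forward(self, x1, x2):
        z1 = self.encoder(x1).flatten(1)
        z2 = self.encoder(x2).flatten(1)
        return self.head(torch.cat([z1, z2], dim=1)).squeeze(1)

# Trained on genuine/impostor pairs with a binary matching target, e.g.:
# loss = nn.BCEWithLogitsLoss()(matcher(x_a, x_b), is_same_identity)
```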
Related papers
- Rethinking the Key Factors for the Generalization of Remote Sensing Stereo Matching Networks [15.456986824737067]
The stereo matching task relies on expensive airborne LiDAR data for ground truth.
In this paper, we study key training factors from three perspectives.
We present an unsupervised stereo matching network with good generalization performance.
arXiv Detail & Related papers (2024-08-14T15:26:10Z)
- Koopcon: A new approach towards smarter and less complex learning [13.053285552524052]
In the era of big data, the sheer volume and complexity of datasets pose significant challenges in machine learning.
This paper introduces an innovative Autoencoder-based dataset condensation model backed by Koopman operator theory.
Inspired by the predictive coding mechanisms of the human brain, our model leverages a novel approach to encode and reconstruct data.
arXiv Detail & Related papers (2024-05-22T17:47:14Z)
- Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images captured in low-light scenes is a challenging but widely studied task in computer vision.
The main obstacle lies in modeling the distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the encoder with scene-irrelevant generality across diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z)
- Class Anchor Margin Loss for Content-Based Image Retrieval [97.81742911657497]
We propose a novel repeller-attractor loss that falls within the metric learning paradigm, yet directly optimizes the L2 metric without the need to generate pairs.
We evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, using both convolutional and transformer architectures.
arXiv Detail & Related papers (2023-06-01T12:53:10Z)
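One plausible reading of such a repeller-attractor objective is a set of learnable per-class anchors that embeddings are pulled toward (own class) and pushed away from (all other classes) directly in L2 distance, with no pair mining. The sketch below is that reading, not the paper's exact formula.

```python
import torch
import torch.nn as nn

class ClassAnchorMarginLoss(nn.Module):
    """Illustrative attractor-repeller loss over learnable class anchors
    (a plausible reading of the abstract, not the paper's exact formula)."""
    def __init__(self, num_classes, dim, margin=1.0):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_classes, dim))
        self.margin = margin

    def forward(self, z, labels):
        d = torch.cdist(z, self.anchors)               # L2 to every anchor
        attract = d.gather(1, labels[:, None]).mean()  # pull to own anchor
        mask = torch.ones_like(d).scatter_(1, labels[:, None], 0.0)
        repel = torch.relu(self.margin - d)            # push from the rest
        return attract + (repel * mask).sum() / mask.sum()
```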
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
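For context, the drift this paper targets arises in cross-batch memory schemes that accumulate embeddings across training iterations. The minimal queue below only sets up that situation; the paper's adaptive correction is not shown, and all names are illustrative.

```python
import torch

class CrossBatchMemory:
    """Minimal embedding queue in the style of cross-batch memory.
    Stored copies stop receiving parameter updates, so they slowly go
    stale relative to current features (the representational drift
    the paper addresses; its correction is not sketched here)."""
    def __init__(self, size, dim):
        self.feats = torch.zeros(size, dim)
        self.labels = torch.zeros(size, dtype=torch.long)
        self.ptr, self.size = 0, size

    def enqueue(self, z, y):
        n = z.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.size
        self.feats[idx] = z.detach()   # frozen snapshot of current embeddings
        self.labels[idx] = y
        self.ptr = (self.ptr + n) % self.size
```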
- Deepfake Detection via Joint Unsupervised Reconstruction and Supervised Classification [25.84902508816679]
We introduce a novel approach for deepfake detection that considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, an aspect that existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly-used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z)
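A generic way to picture such joint training is a shared encoder feeding both an unsupervised reconstruction branch and a supervised real/fake classifier. The names and weighting below are assumptions, not the paper's exact setup.

```python
import torch.nn.functional as F

def joint_loss(encoder, decoder, classifier, x, y, recon_weight=1.0):
    """Illustrative multi-task objective: one shared encoder feeds both
    an unsupervised reconstruction branch and a supervised classifier."""
    z = encoder(x)
    recon = F.mse_loss(decoder(z), x)      # unsupervised branch
    logits = classifier(z.flatten(1))      # supervised branch
    cls = F.cross_entropy(logits, y)
    return cls + recon_weight * recon
```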
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
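The pair-generation idea can be sketched in a few lines: paste a segment from one image into another so the two images share a known pattern. The mask here is assumed to come from some off-the-shelf object segmenter; the paper's actual segment selection and blending strategy may differ.

```python
def make_synthetic_pair(img_a, img_b, mask):
    """Illustrative copy-paste pair generation. Tensors are (C, H, W);
    mask is (1, H, W) in {0, 1}, marking an object segment of img_a."""
    composite = mask * img_a + (1 - mask) * img_b
    return img_a, composite   # training pair with a known shared pattern
```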
- Dataset Condensation with Gradient Matching [36.14340188365505]
We propose a training set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch.
We rigorously evaluate its performance in several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2020-06-10T16:30:52Z)
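The core of gradient-matching condensation can be sketched as: compute the network's loss gradients on a real batch and on the synthetic set, then update the synthetic images to bring the two closer. The sketch uses a plain MSE distance between gradients and omits the outer loops over network initializations and classes that the full method uses.

```python
import torch
import torch.nn.functional as F

def gradient_match_step(net, x_syn, y_syn, x_real, y_real, opt_syn):
    """One simplified condensation step. x_syn must be a leaf tensor with
    requires_grad=True, registered as the parameter of opt_syn."""
    params = tuple(net.parameters())
    g_real = torch.autograd.grad(
        F.cross_entropy(net(x_real), y_real), params)
    g_syn = torch.autograd.grad(
        F.cross_entropy(net(x_syn), y_syn), params,
        create_graph=True)                 # keep the graph to reach x_syn
    match = sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()                       # differentiates through g_syn to x_syn
    opt_syn.step()
```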
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep the resulting dataset manageable, we propose to apply a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
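A common trick for jointly learning continuous and discrete codes, which may or may not match this paper's scheme, is sign binarization with a straight-through gradient: the forward pass emits binary codes while the backward pass flows through the continuous embedding.

```python
import torch

def to_hash_code(z):
    """Illustrative straight-through discretization: b = sign(z) in the
    forward pass; gradients pass through z unchanged in the backward pass."""
    b = torch.sign(z)              # discrete code in {-1, 0, +1}
    return z + (b - z).detach()    # equals b in forward, d/dz = 1 in backward
```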
- Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep Learning-Based Iris Recognition Under Limited Annotated Data [17.243339961137643]
We present a texture aware lightweight deep learning framework for iris recognition.
To address the dearth of labelled iris data, we propose a reconstruction loss guided unsupervised pre-training stage.
Next, we propose several texture aware improvisations inside a Convolutional Neural Network to better leverage iris textures.
arXiv Detail & Related papers (2020-02-20T22:30:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.