Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep
Learning-Based Iris Recognition Under Limited Annotated Data
- URL: http://arxiv.org/abs/2002.09048v1
- Date: Thu, 20 Feb 2020 22:30:38 GMT
- Title: Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep
Learning-Based Iris Recognition Under Limited Annotated Data
- Authors: Manashi Chakraborty, Mayukh Roy, Prabir Kumar Biswas, Pabitra Mitra
- Abstract summary: We present a texture-aware, lightweight deep learning framework for iris recognition.
To address the dearth of labelled iris data, we propose a reconstruction-loss-guided unsupervised pre-training stage.
Next, we propose several texture-aware improvisations inside a Convolutional Neural Network to better leverage iris textures.
- Score: 17.243339961137643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a texture-aware, lightweight deep learning framework for iris recognition. Our contributions are primarily threefold. First, to address the dearth of labelled iris data, we propose a reconstruction-loss-guided unsupervised pre-training stage followed by supervised refinement. This drives the network weights to focus on discriminative iris texture patterns. Next, we propose several texture-aware improvisations inside a Convolutional Neural Network to better leverage iris textures. Finally, we show that our systematic training and architectural choices yield an efficient framework with up to 100x fewer parameters than contemporary deep learning baselines, while achieving better recognition performance in both within- and cross-dataset evaluations.
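The abstract's first contribution is a two-stage recipe: unsupervised pre-training driven by a reconstruction loss, followed by supervised refinement on the limited labelled data. A minimal NumPy sketch of that recipe is below; the linear autoencoder, toy data, shapes, and learning rate are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# A hedged sketch of reconstruction-loss-guided pre-training (stage 1)
# followed by reuse of the encoder for supervised refinement (stage 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))                 # stand-in for unlabelled iris patches

W_enc = rng.normal(scale=0.1, size=(64, 16))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(16, 64))   # decoder weights
lr, losses = 0.5, []

# Stage 1: unsupervised pre-training with a reconstruction (MSE) loss.
for _ in range(200):
    Z = X @ W_enc                              # latent code
    err = Z @ W_dec - X                        # reconstruction error
    losses.append(np.mean(err ** 2))
    scale = 2.0 / err.size                     # d(mean squared error)/d(err) factor
    W_dec -= lr * (Z.T @ err) * scale          # gradient step on decoder
    W_enc -= lr * (X.T @ (err @ W_dec.T)) * scale  # gradient step on encoder

# Stage 2: supervised refinement would start from the pre-trained encoder;
# here we only extract the features a classifier head would be trained on.
features = X @ W_enc                           # (256, 16) pre-trained representation
```

The point of the sketch is the ordering: the encoder weights are shaped by reconstruction before any labels are used, so the scarce labelled data only has to refine an already texture-sensitive representation.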
Related papers
- Neural Network Pruning by Gradient Descent [7.427858344638741]
We introduce a novel and straightforward neural network pruning framework that incorporates the Gumbel-Softmax technique.
We demonstrate its exceptional compression capability, maintaining high accuracy on the MNIST dataset with only 0.15% of the original network parameters.
We believe our method opens a promising new avenue for deep learning pruning and the creation of interpretable machine learning systems.
arXiv Detail & Related papers (2023-11-21T11:12:03Z)
- Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- Super-Resolution and Image Re-projection for Iris Recognition [67.42500312968455]
Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine grained details from low resolution images.
In this work we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment.
Results show that CNNs and image re-projection can improve results, especially the accuracy of recognition systems.
arXiv Detail & Related papers (2022-10-20T09:46:23Z)
- Texture Aware Autoencoder Pre-training And Pairwise Learning Refinement For Improved Iris Recognition [16.383084641568693]
This paper presents an end-to-end trainable iris recognition system for datasets with limited training data.
We build upon our previous stagewise learning framework with certain key optimization and architectural innovations.
We validate our model across three publicly available iris datasets and the proposed model consistently outperforms both traditional and deep learning baselines.
arXiv Detail & Related papers (2022-02-15T15:12:31Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework simultaneously estimates illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Incremental Learning via Rate Reduction [26.323357617265163]
Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.
We propose utilizing an alternative "white box" architecture derived from the principle of rate reduction, where each layer of the network is explicitly computed without backpropagation.
Under this paradigm, we demonstrate that, given a pre-trained network and new data classes, our approach can provably construct a new network that emulates joint training with all past and new classes.
arXiv Detail & Related papers (2020-11-30T07:23:55Z)
- Learning Visual Representations for Transfer Learning by Suppressing Texture [38.901410057407766]
In self-supervised learning, texture as a low-level cue may provide shortcuts that prevent the network from learning higher level representations.
We propose to use classic methods based on anisotropic diffusion to augment training using images with suppressed texture.
We empirically show that our method achieves state-of-the-art results on object detection and image classification.
arXiv Detail & Related papers (2020-11-03T18:27:03Z)
- Semantically-Guided Representation Learning for Self-Supervised Monocular Depth [40.49380547487908]
We propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning.
Our method improves upon the state of the art for self-supervised monocular depth prediction over all pixels, fine-grained details, and per semantic categories.
arXiv Detail & Related papers (2020-02-27T18:40:10Z)
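The first related entry above prunes networks by incorporating the Gumbel-Softmax technique. As a hedged illustration of the underlying mechanism (not that paper's implementation), the sketch below draws a differentiable soft keep/prune mask per weight; the logit values and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Soft, differentiable relaxation of sampling a one-hot category."""
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=logits.shape)
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# One [keep, prune] logit pair per weight; during training a sparsity
# penalty would push most pairs toward "prune". Values here are fixed.
logits = np.stack([np.zeros(8), np.linspace(-4.0, 4.0, 8)], axis=-1)
mask = gumbel_softmax(logits)[..., 0]          # soft keep-probability per weight
weights = rng.normal(size=8)
pruned = weights * mask                        # masked forward pass
```

Because the mask is a smooth function of the logits, gradients flow through the pruning decision itself, which is what lets such methods learn which parameters to discard rather than thresholding them after the fact.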
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.