AILearn: An Adaptive Incremental Learning Model for Spoof Fingerprint
Detection
- URL: http://arxiv.org/abs/2012.14639v1
- Date: Tue, 29 Dec 2020 07:26:37 GMT
- Title: AILearn: An Adaptive Incremental Learning Model for Spoof Fingerprint
Detection
- Authors: Shivang Agarwal, Ajita Rattani, C. Ravindranath Chowdary
- Abstract summary: Incremental learning enables the learner to accommodate new knowledge without retraining the existing model.
We propose AILearn, a generic model for incremental learning which overcomes the stability-plasticity dilemma.
We demonstrate the efficacy of the proposed AILearn model on the spoof fingerprint detection application.
- Score: 12.676356746752893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning enables the learner to accommodate new knowledge without
retraining the existing model. It is a challenging task which requires learning
from new data as well as preserving the knowledge extracted from the previously
accessed data. This challenge is known as the stability-plasticity dilemma. We
propose AILearn, a generic model for incremental learning which overcomes the
stability-plasticity dilemma by carefully integrating the ensemble of base
classifiers trained on new data with the current ensemble without retraining
the model from scratch on the entire data. We demonstrate the efficacy of the
proposed AILearn model on the spoof fingerprint detection application. One of the
significant challenges associated with spoof fingerprint detection is the
performance drop on spoofs generated using new fabrication materials. AILearn
is an adaptive incremental learning model which adapts to the features of the
"live" and "spoof" fingerprint images and efficiently recognizes the new
spoof fingerprints as well as the known spoof fingerprints when the new data is
available. To the best of our knowledge, AILearn is the first incremental
learning algorithm that adapts to the properties of the data to generate a
diverse ensemble of base classifiers. From the experiments
conducted on standard high-dimensional datasets LivDet 2011, LivDet 2013 and
LivDet 2015, we show that the performance gain on new fake materials is
substantial. On average, we achieve a 49.57% improvement in accuracy between
consecutive learning phases.
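The integration step the abstract describes (train base classifiers on only the newly arrived data, then fold them into the existing ensemble) can be illustrated with a minimal sketch. The bootstrap-diversified decision trees and majority voting below are assumptions made for illustration, not the paper's actual base learners or combination rule:

```python
# Minimal sketch of the incremental-ensemble idea: each learning phase trains
# base classifiers on only the newly arrived data and appends them to the
# existing ensemble, so no retraining on the full history is needed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class IncrementalEnsemble:
    def __init__(self):
        self.base_classifiers = []

    def learn_phase(self, X_new, y_new, n_learners=5, seed=0):
        """One learning phase: fit base learners on the new batch only."""
        rng = np.random.default_rng(seed)
        for _ in range(n_learners):
            # Bootstrap resampling is one (assumed) way to induce diversity.
            idx = rng.choice(len(X_new), size=len(X_new), replace=True)
            self.base_classifiers.append(
                DecisionTreeClassifier().fit(X_new[idx], y_new[idx]))

    def predict(self, X):
        """Majority vote across all phases' base classifiers."""
        votes = np.stack([clf.predict(X) for clf in self.base_classifiers])
        return np.apply_along_axis(
            lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```

Each call to `learn_phase` constitutes one learning phase on new data only; previously seen data is never revisited, which is the property the abstract's stability-plasticity claim rests on.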
Related papers
- Data Augmentation for Sparse Multidimensional Learning Performance Data Using Generative AI [17.242331892899543]
Learning performance data describe correct and incorrect answers or problem-solving attempts in adaptive learning.
Learning performance data tend to be highly sparse (80%-90% missing observations) in most real-world applications due to adaptive item selection.
This article proposes a systematic framework for augmenting learner data to address data sparsity in learning performance data.
arXiv Detail & Related papers (2024-09-24T00:25:07Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
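A toy reading of the gradient-projection idea, assuming both a forget-set loss and a retain-set loss are available (the paper itself targets the harder case where the training set is no longer accessible): ascend on the forget loss while removing the component that would disturb the retained knowledge.

```python
# Toy sketch of gradient-projection unlearning: take an ascent step on the
# forget-set loss after removing its component along the retain-set gradient,
# so retained knowledge is (to first order) left undisturbed. This is an
# illustrative reading of the summary, not the paper's exact PGU procedure.
import torch

def projected_unlearning_step(model, loss_forget, loss_retain, lr=1e-3):
    g_f = torch.autograd.grad(loss_forget, list(model.parameters()),
                              retain_graph=True)
    g_r = torch.autograd.grad(loss_retain, list(model.parameters()))
    with torch.no_grad():
        for p, gf, gr in zip(model.parameters(), g_f, g_r):
            denom = (gr * gr).sum().clamp_min(1e-12)
            gf_proj = gf - ((gf * gr).sum() / denom) * gr  # drop interference
            p.add_(lr * gf_proj)  # ascend on forget loss, orthogonal to retain
```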
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z) - Towards Plastic and Stable Exemplar-Free Incremental Learning: A Dual-Learner Framework with Cumulative Parameter Averaging [12.168402195820649]
We propose a Dual-Learner framework with Cumulative Parameter Averaging (DLCPA).
We show that DLCPA outperforms several state-of-the-art exemplar-free baselines in both Task-IL and Class-IL settings.
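A hypothetical sketch of the cumulative-parameter-averaging side of such a dual-learner setup: a plastic learner is trained per task, and its weights are folded into a running mean that acts as the stable learner. The class name and update rule are assumptions for illustration, not the paper's DLCPA specification.

```python
# Hypothetical sketch of cumulative parameter averaging: after each task, fold
# the plastic (task-specific) learner's weights into a running mean that acts
# as the stable learner. Structure and naming are illustrative assumptions.
import copy
import torch

class CumulativeAverager:
    def __init__(self, model):
        self.stable = copy.deepcopy(model)  # slow, averaged learner
        self.n_tasks = 0

    @torch.no_grad()
    def absorb(self, task_model):
        """Running mean: avg_t = ((t - 1) * avg_{t-1} + w_t) / t."""
        self.n_tasks += 1
        for p_s, p_t in zip(self.stable.parameters(), task_model.parameters()):
            p_s.mul_((self.n_tasks - 1) / self.n_tasks).add_(p_t / self.n_tasks)
```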
arXiv Detail & Related papers (2023-10-28T08:48:44Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - Robust Learning with Progressive Data Expansion Against Spurious
Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - SRIL: Selective Regularization for Class-Incremental Learning [5.810252620242912]
Class-Incremental Learning aims to create an integrated model that balances plasticity and stability to overcome catastrophic forgetting.
We propose a selective regularization method that accepts new knowledge while maintaining previous knowledge.
We validate the effectiveness of the proposed method through extensive experimental protocols using CIFAR-100, ImageNet-Subset, and ImageNet-Full.
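As a rough illustration of regularization-based stability, the sketch below penalizes deviation from the previous model's weights through a per-parameter importance mask. The mask criterion and quadratic penalty are assumptions; the paper's selective regularization may differ.

```python
# Rough illustration of regularization-based stability: a quadratic penalty
# toward the previous model's weights, gated by a per-parameter mask that
# selects which weights to protect. Mask criterion and penalty form are
# assumptions, not the paper's selective regularization.
import torch

def selective_reg_loss(model, prev_params, masks, task_loss, lam=1.0):
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        # Protect only the parameters the mask marks as important to old tasks.
        penalty = penalty + (masks[name] * (p - prev_params[name]) ** 2).sum()
    return task_loss + lam * penalty
```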
arXiv Detail & Related papers (2023-05-09T05:04:35Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
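A minimal sketch of such an exemplar memory, assuming a per-class reservoir-sampling policy (an illustrative choice; exemplar selection strategies vary across methods):

```python
# Minimal sketch of an exemplar memory for rehearsal: keep a small,
# class-balanced buffer of past samples and replay it alongside each new
# task's data. The per-class reservoir-sampling policy is an assumed,
# illustrative selection strategy.
import random
from collections import defaultdict

class ExemplarMemory:
    def __init__(self, per_class=20):
        self.per_class = per_class
        self.buffer = defaultdict(list)   # class label -> stored samples
        self.seen = defaultdict(int)      # class label -> samples seen so far

    def add(self, samples, labels):
        for x, y in zip(samples, labels):
            self.seen[y] += 1
            bucket = self.buffer[y]
            if len(bucket) < self.per_class:
                bucket.append(x)
            else:
                j = random.randrange(self.seen[y])
                if j < self.per_class:
                    bucket[j] = x  # reservoir sampling keeps a uniform subset

    def replay(self):
        """All stored (sample, label) pairs, to mix into future training."""
        return [(x, y) for y, xs in self.buffer.items() for x in xs]
```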
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - EaZy Learning: An Adaptive Variant of Ensemble Learning for Fingerprint
Liveness Detection [14.99677459192122]
Fingerprint liveness detection mechanisms perform well under the within-dataset environment but fail miserably under cross-sensor and cross-dataset settings.
To enhance the generalization abilities, robustness and the interoperability of the fingerprint spoof detectors, the learning models need to be adaptive towards the data.
We propose a generic model, EaZy learning, which can be considered an adaptive midway between eager and lazy learning.
arXiv Detail & Related papers (2021-03-03T06:40:19Z) - Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z) - Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
In the training procedure of Class-IL, since the model has no knowledge of subsequent tasks, it extracts only the features necessary for the tasks learned so far, which are insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
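A toy sketch of that combination, assuming rotation prediction as the self-supervised auxiliary task (a common choice, not necessarily the paper's): the backbone is trained on the incremental classification loss plus an unlabeled rotation-prediction loss.

```python
# Toy sketch of adding a self-supervised auxiliary loss to Class-IL training.
# Rotation prediction is an assumed, common pretext task; the backbone/head
# decomposition is illustrative.
import torch
import torch.nn.functional as F

def class_il_ssl_loss(backbone, cls_head, rot_head, x, y, ssl_weight=0.5):
    ce = F.cross_entropy(cls_head(backbone(x)), y)  # supervised Class-IL loss
    # Pretext task: predict which of four rotations was applied (no labels).
    rot_k = torch.randint(0, 4, (x.size(0),), device=x.device)
    x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                         for img, k in zip(x, rot_k)])
    ssl = F.cross_entropy(rot_head(backbone(x_rot)), rot_k)
    return ce + ssl_weight * ssl
```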
arXiv Detail & Related papers (2020-06-10T15:15:27Z)