DeepSign: Deep On-Line Signature Verification
- URL: http://arxiv.org/abs/2002.10119v3
- Date: Fri, 22 Jan 2021 15:53:57 GMT
- Title: DeepSign: Deep On-Line Signature Verification
- Authors: Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez and Javier
Ortega-Garcia
- Abstract summary: This study provides an in-depth analysis of state-of-the-art deep learning approaches for on-line signature verification.
We present and describe the new DeepSignDB on-line handwritten signature biometric public database.
We adapt and evaluate our recent deep learning approach named Time-Aligned Recurrent Neural Networks (TA-RNNs) for the task of on-line handwritten signature verification.
- Score: 6.5379404287240295
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has advanced at a breathtaking pace in recent years,
surpassing traditional handcrafted approaches and even humans on many
different tasks. However, for some tasks, such as the verification of
handwritten signatures, the amount of publicly available data is scarce, which
makes it difficult to test the real limits of deep learning. In addition to the
lack of public data, it is not easy to evaluate the improvements of newly
proposed approaches, as different databases and experimental protocols are
usually considered.
The main contributions of this study are: i) we provide an in-depth analysis
of state-of-the-art deep learning approaches for on-line signature
verification, ii) we present and describe the new DeepSignDB on-line
handwritten signature biometric public database, iii) we propose a standard
experimental protocol and benchmark to be used by the research community to
perform a fair comparison of novel approaches with the state of the
art, and iv) we adapt and evaluate our recent deep learning approach named
Time-Aligned Recurrent Neural Networks (TA-RNNs) for the task of on-line
handwritten signature verification. This approach combines the potential of
Dynamic Time Warping and Recurrent Neural Networks to train more robust systems
against forgeries. Our proposed TA-RNN system outperforms the state of the art,
achieving results even below 2.0% EER when considering skilled forgery
impostors and just one training signature per user.
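
To make the TA-RNN idea above concrete, here is a minimal sketch, not the authors' implementation: classic DTW aligns two pen trajectories, and a small bidirectional GRU scores the aligned pair. The feature set (x, y, pressure), the helper names `dtw_align` and `SignatureComparator`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the DTW + RNN idea behind TA-RNNs (not the authors' code).
# Assumptions: each signature is a (T, F) array of time functions (e.g., x, y,
# pressure); dtw_align and SignatureComparator are illustrative names.
import numpy as np
import torch
import torch.nn as nn

def dtw_align(a: np.ndarray, b: np.ndarray):
    """Classic O(Ta*Tb) DTW; returns the warping path as index pairs."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (Ta, Tb) to (0, 0) to recover the alignment path.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

class SignatureComparator(nn.Module):
    """GRU over DTW-aligned feature pairs; outputs a genuine/forgery score."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(2 * feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, enrolled, questioned, path):
        # Stack the aligned samples of both signatures along the feature axis.
        pairs = torch.stack([torch.cat([enrolled[i], questioned[j]]) for i, j in path])
        _, h = self.rnn(pairs.unsqueeze(0))
        h = torch.cat([h[0], h[1]], dim=-1)   # concatenate both directions
        return torch.sigmoid(self.head(h))    # similarity score in (0, 1)

# Toy usage with random 3-feature trajectories (x, y, pressure).
ref = np.random.randn(120, 3).astype(np.float32)
qry = np.random.randn(100, 3).astype(np.float32)
path = dtw_align(ref, qry)
model = SignatureComparator(feat_dim=3)
score = model(torch.from_numpy(ref), torch.from_numpy(qry), path)
print(float(score))  # above a decision threshold -> accept as genuine
```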
Related papers
- Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Experiments show significant performance improvements for our method compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z)
- Self-Supervised Learning for Text Recognition: A Critical Survey [11.599791967838481]
Text Recognition (TR) refers to the research area that focuses on retrieving textual information from images.
Self-Supervised Learning (SSL) has gained attention by utilizing large datasets of unlabeled data to train Deep Neural Networks (DNNs).
This paper seeks to consolidate the use of SSL in the field of TR, offering a critical and comprehensive overview of the current state of the art.
arXiv Detail & Related papers (2024-07-29T11:11:17Z)
- Position: Quo Vadis, Unsupervised Time Series Anomaly Detection? [11.269007806012931]
The current state of machine learning scholarship in Timeseries Anomaly Detection (TAD) is plagued by the persistent use of flawed evaluation metrics.
Our paper presents a critical analysis of the status quo in TAD, revealing the misleading track of current research.
arXiv Detail & Related papers (2024-05-04T14:43:31Z)
- Test-Time Training on Graphs with Large Language Models (LLMs) [68.375487369596]
Test-Time Training (TTT) has been proposed as a promising approach to train Graph Neural Networks (GNNs).
Inspired by the great annotation ability of Large Language Models (LLMs) on Text-Attributed Graphs (TAGs), we propose to enhance the test-time training on graphs with LLMs as annotators.
A two-stage training strategy is designed to tailor the test-time model with the limited and noisy labels.
arXiv Detail & Related papers (2024-04-21T08:20:02Z)
- Leveraging Expert Models for Training Deep Neural Networks in Scarce Data Domains: Application to Offline Handwritten Signature Verification [15.88604823470663]
The presented scheme is applied to offline handwritten signature verification (OffSV).
The proposed Student-Teacher (S-T) configuration utilizes feature-based knowledge distillation (FKD), sketched after this entry.
Remarkably, the models trained using this technique exhibit comparable, if not superior, performance to the teacher model across three popular signature datasets.
arXiv Detail & Related papers (2023-08-02T13:28:12Z)
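
A minimal sketch of feature-based knowledge distillation as described in the entry above, assuming a frozen teacher and an MSE loss between feature spaces; the architectures and loss are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of feature-based knowledge distillation (FKD); the teacher,
# student, and data shapes are illustrative assumptions.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
student = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 128))
teacher.eval()                        # frozen expert model
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
images = torch.randn(16, 1, 32, 32)   # stand-in for signature images

for step in range(100):
    with torch.no_grad():
        t_feat = teacher(images)      # target features from the expert
    s_feat = student(images)
    loss = nn.functional.mse_loss(s_feat, t_feat)  # match feature spaces
    opt.zero_grad()
    loss.backward()
    opt.step()
```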
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios (sketched after this entry).
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
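
A minimal sketch of the frozen-backbone baseline from the entry above: a pre-trained model (PTM) is kept fixed and only a linear classifier is learned. Using torchvision's ResNet-18 as the PTM and a 10-class head are illustrative assumptions.

```python
# Minimal sketch: frozen pre-trained backbone + learnable linear classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()           # expose the 512-d features
for p in backbone.parameters():
    p.requires_grad_(False)           # freeze the PTM
backbone.eval()

classifier = nn.Linear(512, 10)       # the only learnable part
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)

x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
with torch.no_grad():
    feats = backbone(x)               # features never receive gradients
loss = nn.functional.cross_entropy(classifier(feats), y)
opt.zero_grad()
loss.backward()
opt.step()
```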
- Deep Learning Architecture for Automatic Essay Scoring [0.0]
We propose a novel architecture based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs).
In the proposed architecture, the multichannel convolutional layer learns and captures contextual features of word n-grams from the word embedding vectors (sketched after this entry).
Our proposed system achieves significantly higher grading accuracy than other deep learning-based AES systems.
arXiv Detail & Related papers (2022-06-16T14:56:24Z)
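
A minimal sketch of the multichannel convolution-over-embeddings idea from the essay scoring entry above; the vocabulary size, kernel sizes, and LSTM regressor head are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: parallel Conv1d channels capture word n-gram features from
# embeddings; an LSTM aggregates them into an essay score. All sizes assumed.
import torch
import torch.nn as nn

class EssayScorer(nn.Module):
    def __init__(self, vocab=5000, emb=100, channels=64, kernels=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # One conv "channel" per n-gram size, applied along the time axis.
        self.convs = nn.ModuleList(nn.Conv1d(emb, channels, k, padding=k // 2) for k in kernels)
        self.rnn = nn.LSTM(channels * len(kernels), 64, batch_first=True)
        self.head = nn.Linear(64, 1)   # regress the essay score

    def forward(self, tokens):                    # tokens: (B, T)
        x = self.emb(tokens).transpose(1, 2)      # (B, emb, T) for Conv1d
        feats = [torch.relu(c(x)) for c in self.convs]
        T = min(f.size(-1) for f in feats)        # align lengths after padding
        x = torch.cat([f[..., :T] for f in feats], dim=1).transpose(1, 2)
        out, _ = self.rnn(x)                      # contextual n-gram features
        return self.head(out[:, -1])              # score from the final state

scores = EssayScorer()(torch.randint(0, 5000, (4, 50)))
print(scores.shape)  # torch.Size([4, 1])
```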
- Self-supervised on Graphs: Contrastive, Generative, or Predictive [25.679620842010422]
Self-supervised learning (SSL) is emerging as a new paradigm for extracting informative knowledge through well-designed pretext tasks.
We divide existing graph SSL methods into three categories: contrastive, generative, and predictive.
We also summarize the commonly used datasets, evaluation metrics, downstream tasks, and open-source implementations of various algorithms.
arXiv Detail & Related papers (2021-05-16T03:30:03Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization [60.07531696857743]
We propose the COLAM framework, which co-learns DNNs and soft labels through alternating minimization of two objectives (sketched after this entry).
arXiv Detail & Related papers (2020-04-26T17:50:20Z)
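
A minimal sketch of the alternating-minimization idea in the COLAM entry above: alternate between fitting the network to the current soft labels and pulling the soft labels toward the network's predictions. The interpolation factor and architecture are illustrative assumptions.

```python
# Minimal sketch of co-learning a DNN and soft labels by alternating steps.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(128, 20)
hard = torch.randint(0, 10, (128,))
soft = nn.functional.one_hot(hard, 10).float()   # initialize soft labels

for epoch in range(20):
    # Step 1: minimize cross-entropy of the network w.r.t. current soft labels.
    logits = net(x)
    loss = -(soft * nn.functional.log_softmax(logits, dim=1)).sum(1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Step 2: minimize w.r.t. the labels -> nudge them toward predictions.
    with torch.no_grad():
        soft = 0.9 * soft + 0.1 * torch.softmax(net(x), dim=1)
```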
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model (sketched after this entry).
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
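
A minimal sketch of multi-expert distillation in the spirit of the LFME entry above, with uniform expert weighting as a simplifying assumption; LFME itself schedules the distillation in a self-paced way.

```python
# Minimal sketch: distill several 'Expert' models into one unified student.
import torch
import torch.nn as nn

experts = [nn.Linear(20, 10) for _ in range(3)]   # stand-ins for trained experts
student = nn.Linear(20, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 20)
for step in range(100):
    with torch.no_grad():
        # Aggregate expert knowledge as the mean of their softened outputs
        # (uniform weighting; LFME weights experts adaptively instead).
        target = torch.stack([torch.softmax(e(x), dim=1) for e in experts]).mean(0)
    log_pred = nn.functional.log_softmax(student(x), dim=1)
    loss = nn.functional.kl_div(log_pred, target, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```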
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.