A white-box analysis on the writer-independent dichotomy transformation
applied to offline handwritten signature verification
- URL: http://arxiv.org/abs/2004.03370v2
- Date: Tue, 14 Apr 2020 17:51:28 GMT
- Authors: Victor L. F. Souza, Adriano L. I. Oliveira, Rafael M. O. Cruz, Robert
Sabourin
- Abstract summary: A writer-independent (WI) framework trains a single model to perform signature verification for all writers, using a dissimilarity space generated by the dichotomy transformation.
We present a white-box analysis of this approach, highlighting how it handles the challenges of HSV, the dynamic selection of references through the fusion function, and its application to transfer learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A high number of writers, a small number of training samples per
writer with high intra-class variability, and heavily imbalanced class
distributions are among the challenges of the offline Handwritten Signature
Verification (HSV) problem. A good alternative for tackling these issues is to use
a writer-independent (WI) framework. In WI systems, a single model is trained
to perform signature verification for all writers from a dissimilarity space
generated by the dichotomy transformation. Among the advantages of this
framework are its scalability to deal with some of these challenges and the
ease with which it accommodates new writers, which makes it well suited to a
transfer learning context. In this work, we present a white-box analysis of
this approach, highlighting how it handles the challenges, the dynamic
selection of references through the fusion function, and its application to
transfer learning. All the
analyses are carried out at the instance level using the instance hardness (IH)
measure. The experimental results show that, using the IH analysis, we were
able to characterize "good" and "bad" quality skilled forgeries as well as the
frontier region between positive and negative samples. This enables future
investigations into methods for improving the discrimination between genuine
signatures and skilled forgeries by considering these characterizations.
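The two core ideas in the abstract, mapping signature pairs into a dissimilarity space via the dichotomy transformation and scoring samples with an instance hardness (IH) measure, can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes the element-wise absolute difference as the dissimilarity, a toy feature representation, and the standard k-disagreeing-neighbors (kDN) formulation as the IH measure; `build_dissimilarity_space` and its dictionary layout are hypothetical helpers.

```python
# Sketch of the writer-independent dichotomy transformation (DT) and a
# kDN-style instance hardness measure. Feature vectors and helper names
# are illustrative assumptions, not the paper's implementation.
import numpy as np

def dichotomy_transform(x_query, x_reference):
    """Map a pair of feature vectors into the dissimilarity space as the
    element-wise absolute difference |x_q - x_r|."""
    return np.abs(np.asarray(x_query, dtype=float)
                  - np.asarray(x_reference, dtype=float))

def build_dissimilarity_space(features_by_writer):
    """Build DT samples from a dict {writer_id: [feature_vector, ...]}.
    Pairs from the same writer form the positive ("within") class (label 1);
    pairs from different writers form the negative ("between") class (label 0).
    A single binary classifier trained on (X, y) then serves all writers."""
    X, y = [], []
    writers = list(features_by_writer)
    for i, wa in enumerate(writers):
        sigs_a = features_by_writer[wa]
        # Positive (within-writer) pairs.
        for p in range(len(sigs_a)):
            for q in range(p + 1, len(sigs_a)):
                X.append(dichotomy_transform(sigs_a[p], sigs_a[q]))
                y.append(1)
        # Negative (between-writer) pairs; this class grows much faster,
        # which is the source of the heavy class imbalance noted above.
        for wb in writers[i + 1:]:
            for sa in sigs_a:
                for sb in features_by_writer[wb]:
                    X.append(dichotomy_transform(sa, sb))
                    y.append(0)
    return np.array(X), np.array(y)

def kdn_instance_hardness(X, y, k=3):
    """k-Disagreeing Neighbors: for each instance, the fraction of its k
    nearest neighbors (excluding itself) with a different label. Values
    near 1 flag instances on the frontier between the two classes."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each instance itself
    hardness = np.empty(len(X))
    for i in range(len(X)):
        nearest = np.argsort(dists[i])[:k]
        hardness[i] = np.mean(y[nearest] != y[i])
    return hardness
```

In this sketch, a new writer needs no retraining: a questioned signature is paired with that writer's references, each pair is passed through `dichotomy_transform`, and the same classifier scores the resulting dissimilarity vectors, which is why the framework lends itself to transfer learning.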
Related papers
- Paired Completion: Flexible Quantification of Issue-framing at Scale with LLMs [0.41436032949434404]
We develop and rigorously evaluate new detection methods for issue framing and narrative analysis within large text datasets.
We show that issue framing can be reliably and efficiently detected in large corpora with only a few examples of either perspective on a given issue.
arXiv Detail & Related papers (2024-08-19T07:14:15Z) - TRAD: Enhancing LLM Agents with Step-Wise Thought Retrieval and Aligned
Decision [32.24857534147114]
Large language model (LLM) agents have been built for different tasks like web navigation and online shopping.
In this paper, we propose a novel framework (TRAD) to address these issues.
TRAD conducts Thought Retrieval, achieving step-level demonstration selection via thought matching.
Then, TRAD introduces Aligned Decision, complementing retrieved demonstration steps with their previous or subsequent steps.
arXiv Detail & Related papers (2024-03-10T13:58:38Z) - Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
arXiv Detail & Related papers (2024-02-16T13:53:26Z) - Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z) - Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z) - Multiscale Feature Learning Using Co-Tuplet Loss for Offline Handwritten Signature Verification [0.0]
We introduce the MultiScale Signature feature learning Network (MS-SigNet) with the co-tuplet loss.
MS-SigNet learns both global and regional signature features from multiple spatial scales, enhancing feature discrimination.
We also present HanSig, a large-scale Chinese signature dataset to support robust system development for this language.
arXiv Detail & Related papers (2023-08-01T10:14:43Z) - SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised
Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z) - In and Out-of-Domain Text Adversarial Robustness via Label Smoothing [64.66809713499576]
We study the adversarial robustness provided by various label smoothing strategies in foundational models for diverse NLP tasks.
Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks.
We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
arXiv Detail & Related papers (2022-12-20T14:06:50Z) - TraSE: Towards Tackling Authorial Style from a Cognitive Science
Perspective [4.123763595394021]
Authorship attribution experiments with over 27,000 authors and 1.4 million samples in a cross-domain scenario resulted in 90% attribution accuracy.
A qualitative analysis is performed on TraSE using physical human characteristics, like age, to validate its claim on capturing cognitive traits.
arXiv Detail & Related papers (2022-06-21T19:55:07Z) - Text Recognition in Real Scenarios with a Few Labeled Samples [55.07859517380136]
Scene text recognition (STR) is still a hot research topic in computer vision field.
This paper proposes a few-shot adversarial sequence domain adaptation (FASDA) approach to build sequence adaptation.
Our approach can maximize the character-level confusion between the source domain and the target domain.
arXiv Detail & Related papers (2020-06-22T13:03:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.