SWIS: Self-Supervised Representation Learning For Writer Independent
Offline Signature Verification
- URL: http://arxiv.org/abs/2202.13078v1
- Date: Sat, 26 Feb 2022 06:33:25 GMT
- Title: SWIS: Self-Supervised Representation Learning For Writer Independent
Offline Signature Verification
- Authors: Siladittya Manna, Soumitri Chattopadhyay, Saumik Bhattacharya and
Umapada Pal
- Abstract summary: Writer independent offline signature verification is one of the most challenging tasks in pattern recognition.
We propose a novel self-supervised learning framework for writer independent offline signature verification.
- Score: 16.499360910037904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Writer independent offline signature verification is one of the most
challenging tasks in pattern recognition as there is often a scarcity of
training data. To handle this data scarcity problem, in this paper, we propose
a novel self-supervised learning (SSL) framework for writer independent offline
signature verification. To our knowledge, this is the first attempt to utilize a
self-supervised setting for the signature verification task. The objective of
self-supervised representation learning from the signature images is achieved
by minimizing the cross-covariance between two random variables belonging to
different feature directions and ensuring a positive cross-covariance between
the random variables denoting the same feature direction. This ensures that the
features are decorrelated linearly and the redundant information is discarded.
Experiments on several datasets yield encouraging results.
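The decorrelation objective described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the per-feature standardisation step, and the weighting factor `lam` are assumptions; only the two-part goal (cross-covariance between matching feature directions pushed positive, cross-covariance between different feature directions pushed toward zero) comes from the abstract.

```python
import numpy as np

def cross_covariance_loss(z1, z2, lam=5e-3):
    """Hedged sketch of a cross-covariance decorrelation objective.

    z1, z2: (batch, dim) embeddings of two views of the same signatures.
    After standardising each feature over the batch, the diagonal of the
    cross-covariance matrix (same feature direction) is pulled toward +1,
    while off-diagonal entries (different directions) are pushed to zero,
    linearly decorrelating the features and discarding redundancy.
    """
    # standardise each feature dimension over the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    n = z1.shape[0]
    c = z1.T @ z2 / n  # (dim, dim) cross-covariance matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()            # same direction -> +1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # different -> 0
    return on_diag + lam * off_diag
```

Two identical views give a near-zero on-diagonal term, so the loss stays far smaller than for two unrelated batches.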
Related papers
- Offline Signature Verification Based on Feature Disentangling Aided Variational Autoencoder [6.128256936054622]
Main tasks of signature verification systems include extracting features from signature images and training a classifier for classification.
Instances of skilled forgeries are often unavailable when signature verification models are being trained.
This paper proposes a new signature verification method using a variational autoencoder (VAE) to extract features directly from signature images.
arXiv Detail & Related papers (2024-09-29T19:54:47Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - CSSL-RHA: Contrastive Self-Supervised Learning for Robust Handwriting
Authentication [23.565017967901618]
We propose a novel Contrastive Self-Supervised Learning framework for Robust Handwriting Authentication.
It can dynamically learn complex yet important features and accurately predict writer identities.
Our proposed model can still effectively achieve authentication even under abnormal circumstances, such as data falsification and corruption.
arXiv Detail & Related papers (2023-07-18T02:20:46Z) - Label Matching Semi-Supervised Object Detection [85.99282969977541]
Semi-supervised object detection has made significant progress with the development of mean teacher driven self-training.
The label mismatch problem is not yet fully explored in previous works, leading to severe confirmation bias during self-training.
We propose a simple yet effective LabelMatch framework from two different yet complementary perspectives.
arXiv Detail & Related papers (2022-06-14T05:59:41Z) - SURDS: Self-Supervised Attention-guided Reconstruction and Dual Triplet
Loss for Writer Independent Offline Signature Verification [16.499360910037904]
Offline Signature Verification (OSV) is a fundamental biometric task across various forensic, commercial and legal applications.
We propose a two-stage deep learning framework that leverages self-supervised representation learning as well as metric learning for writer-independent OSV.
The proposed framework has been evaluated on two publicly available offline signature datasets and compared with various state-of-the-art methods.
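The SURDS summary above mentions a dual triplet loss for its metric-learning stage. A minimal numpy sketch, assuming Euclidean distances, a shared margin, and two negatives per anchor (one from another writer, one forged); the function name, signature, and margin value are hypothetical, not the paper's formulation:

```python
import numpy as np

def dual_triplet_loss(anchor, pos, neg_writer, neg_forgery, margin=0.2):
    """Hedged sketch of a dual triplet objective for writer-independent OSV.

    One triplet separates the anchor embedding from other writers'
    signatures, a second separates it from forgeries; both use the same
    genuine positive and a hinge with the given margin.
    All inputs: (batch, dim) embedding arrays.
    """
    d = lambda a, b: np.linalg.norm(a - b, axis=-1)  # Euclidean distance
    t1 = np.maximum(0.0, d(anchor, pos) - d(anchor, neg_writer) + margin)
    t2 = np.maximum(0.0, d(anchor, pos) - d(anchor, neg_forgery) + margin)
    return (t1 + t2).mean()
```

When both negatives are far from the anchor the hinge is inactive and the loss is zero; negatives that coincide with the positive incur the full margin from both triplets.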
arXiv Detail & Related papers (2022-01-25T07:26:55Z) - Disjoint Contrastive Regression Learning for Multi-Sourced Annotations [10.159313152511919]
Large-scale datasets are important for the development of deep learning models.
Multiple annotators may be employed to label different subsets of the data.
The inconsistency and bias among different annotators are harmful to the model training.
arXiv Detail & Related papers (2021-12-31T12:39:04Z) - Unsupervised Noisy Tracklet Person Re-identification [100.85530419892333]
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is particularly robust against arbitrary noise in raw tracklets and is therefore scalable to learning discriminative models from unconstrained tracking data.
arXiv Detail & Related papers (2021-01-16T07:31:00Z) - Dual-Refinement: Joint Label and Feature Refinement for Unsupervised
Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to missing labels for the target domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.