Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification
- URL: http://arxiv.org/abs/2410.01441v1
- Date: Wed, 2 Oct 2024 11:43:58 GMT
- Title: Decorrelation-based Self-Supervised Visual Representation Learning for Writer Identification
- Authors: Arkadip Maitra, Shree Mitra, Siladittya Manna, Saumik Bhattacharya, Umapada Pal
- Abstract summary: We explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification.
We show that the proposed framework outperforms contemporary self-supervised learning frameworks on the writer identification benchmark.
To the best of our knowledge, this work is the first of its kind to apply self-supervised learning to representation learning for writer verification tasks.
- Score: 10.55096104577668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning has developed rapidly over the last decade and has been applied in many areas of computer vision. Decorrelation-based self-supervised pretraining has shown great promise among non-contrastive algorithms, yielding performance on par with supervised and contrastive self-supervised baselines. In this work, we explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification. We propose a modified formulation of SWIS, a decorrelation-based framework originally proposed for signature verification, by standardizing the features along each dimension on top of the existing framework. We show that the proposed framework outperforms contemporary self-supervised learning frameworks on the writer identification benchmark, and also outperforms several supervised methods. To the best of our knowledge, this work is the first of its kind to apply self-supervised learning to representation learning for writer verification tasks.
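The recipe the abstract describes, a decorrelation objective with features standardized along each dimension, sits in the Barlow Twins family of non-contrastive losses. As a rough orientation only, here is a minimal PyTorch sketch of that idea; the function names, the `eps` value, and the off-diagonal weight `lam` are illustrative assumptions, not the paper's exact SWIS formulation.

```python
import torch

def standardize(z: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Standardize each embedding dimension across the batch
    # (zero mean, unit variance), as the abstract describes.
    return (z - z.mean(dim=0)) / (z.std(dim=0) + eps)

def decorrelation_loss(z1: torch.Tensor, z2: torch.Tensor,
                       lam: float = 5e-3) -> torch.Tensor:
    # z1, z2: (batch, dim) projector outputs for two augmented
    # views of the same handwriting sample (hypothetical setup).
    n = z1.shape[0]
    z1, z2 = standardize(z1), standardize(z2)
    c = (z1.T @ z2) / n  # (dim, dim) cross-correlation matrix
    # Invariance term: matched dimensions should correlate perfectly.
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()
    # Redundancy-reduction term: distinct dimensions should decorrelate,
    # encouraging disentangled stroke features.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```

In use, one would feed two augmented views of a handwriting image through a shared backbone and projector, then call `decorrelation_loss` on the two embeddings; the standardization step is what the abstract adds on top of the base decorrelation framework.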
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Self-Distilled Representation Learning for Time Series [45.51976109748732]
Self-supervised learning for time-series data holds potential similar to that recently unleashed in Natural Language Processing and Computer Vision.
We propose a conceptually simple yet powerful non-contrastive approach, based on the data2vec self-distillation framework.
We demonstrate the competitiveness of our approach for classification and forecasting as downstream tasks, comparing with state-of-the-art self-supervised learning methods on the UCR and UEA archives as well as the ETT and Electricity datasets.
arXiv Detail & Related papers (2023-11-19T14:34:01Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- GEDI: GEnerative and DIscriminative Training for Self-Supervised Learning [3.6804038214708563]
We study state-of-the-art self-supervised learning objectives and propose a unified formulation based on likelihood learning.
We refer to this combined framework as GEDI, which stands for GEnerative and DIscriminative training.
We show that GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a wide margin.
arXiv Detail & Related papers (2022-12-27T09:33:50Z)
- Self-Taught Metric Learning without Labels [47.832107446521626]
We present a novel self-taught framework for unsupervised metric learning.
It alternates between predicting class-equivalence relations between data through a moving average of an embedding model and learning the model with the predicted relations as pseudo labels.
arXiv Detail & Related papers (2022-05-04T05:48:40Z)
- Leveraging Hidden Structure in Self-Supervised Learning [2.385916960125935]
We propose a principled framework based on a mutual information objective, which integrates self-supervised and structure learning.
Preliminary experiments on CIFAR-10 show that the proposed framework achieves higher generalization performance in downstream classification tasks.
arXiv Detail & Related papers (2021-06-30T13:35:36Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Revisiting Contrastive Learning for Few-Shot Classification [74.78397993160583]
Instance discrimination-based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations.
We show how one can incorporate supervision into the instance discrimination-based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks.
We propose a novel model selection algorithm that can be used in conjunction with a universal embedding trained using CIDS to outperform state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
arXiv Detail & Related papers (2021-01-26T19:58:08Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity-masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.