Vectorization and Rasterization: Self-Supervised Learning for Sketch and
Handwriting
- URL: http://arxiv.org/abs/2103.13716v1
- Date: Thu, 25 Mar 2021 09:47:18 GMT
- Title: Vectorization and Rasterization: Self-Supervised Learning for Sketch and
Handwriting
- Authors: Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Yongxin Yang, Timothy M.
Hospedales, Tao Xiang, Yi-Zhe Song
- Abstract summary: We propose two novel cross-modal translation pre-text tasks for self-supervised feature learning: Vectorization and Rasterization.
Our learned encoder modules benefit both raster-based and vector-based downstream approaches to analysing hand-drawn data.
- Score: 168.91748514706995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning has gained prominence due to its efficacy at
learning powerful representations from unlabelled data that achieve excellent
performance on many challenging downstream tasks. However, supervision-free
pre-text tasks are challenging to design and are usually modality-specific.
Although there is a rich literature of self-supervised methods for either
spatial (such as images) or temporal data (sound or text) modalities, a common
pre-text task that benefits both modalities is largely missing. In this paper,
we are interested in defining a self-supervised pre-text task for sketches and
handwriting data. This data is uniquely characterised by its existence in dual
modalities of rasterized images and vector coordinate sequences. We address and
exploit this dual representation by proposing two novel cross-modal translation
pre-text tasks for self-supervised feature learning: Vectorization and
Rasterization. Vectorization learns to map image space to vector coordinates
and rasterization maps vector coordinates to image space. We show that our
learned encoder modules benefit both raster-based and vector-based downstream
approaches to analysing hand-drawn data. Empirical evidence shows that our
novel pre-text tasks surpass existing single and multi-modal self-supervision
methods.
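As an illustration of how the two pre-text tasks can be set up, the sketch below pairs a CNN raster encoder with a sequence decoder (vectorization: image to coordinates) and an RNN vector encoder with a convolutional decoder (rasterization: coordinates to image). This is a minimal PyTorch sketch written from the abstract alone; the layer sizes, the non-autoregressive decoders, the plain MSE objectives, and the (dx, dy, pen-state) point format are illustrative assumptions, not the paper's exact architecture or losses.

```python
# Minimal sketch of the two cross-modal translation pre-text tasks.
# Details (layer sizes, decoder type, stroke format, losses) are assumptions.
import torch
import torch.nn as nn

class RasterEncoder(nn.Module):
    """CNN encoder: raster sketch/handwriting image -> feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, img):                        # img: (B, 1, H, W)
        return self.fc(self.conv(img).flatten(1))  # (B, feat_dim)

class VectorEncoder(nn.Module):
    """RNN encoder: (dx, dy, pen-state) point sequence -> feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=3, hidden_size=feat_dim, batch_first=True)

    def forward(self, seq):                        # seq: (B, T, 3)
        _, h = self.rnn(seq)
        return h.squeeze(0)                        # (B, feat_dim)

class Vectorizer(nn.Module):
    """Pre-text task 1: raster image -> vector coordinate sequence."""
    def __init__(self, feat_dim=256, max_len=100):
        super().__init__()
        self.encoder = RasterEncoder(feat_dim)
        self.decoder = nn.GRU(input_size=feat_dim, hidden_size=feat_dim, batch_first=True)
        self.out = nn.Linear(feat_dim, 3)          # predict (dx, dy, pen) per step
        self.max_len = max_len

    def forward(self, img):
        z = self.encoder(img)                      # (B, feat_dim)
        steps = z.unsqueeze(1).repeat(1, self.max_len, 1)
        h, _ = self.decoder(steps)
        return self.out(h)                         # (B, max_len, 3)

class Rasterizer(nn.Module):
    """Pre-text task 2: vector coordinate sequence -> raster image."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = VectorEncoder(feat_dim)
        self.decoder = nn.Sequential(              # upsample feature to a 64x64 image
            nn.Linear(feat_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, seq):
        return self.decoder(self.encoder(seq))     # (B, 1, 64, 64)

# Self-supervised pre-training: each modality supervises the other, no labels needed.
vectorize, rasterize = Vectorizer(), Rasterizer()
img = torch.rand(4, 1, 64, 64)                     # rasterized sketches
seq = torch.rand(4, 100, 3)                        # matching coordinate sequences
loss = nn.functional.mse_loss(vectorize(img), seq) + \
       nn.functional.mse_loss(rasterize(seq), img)
loss.backward()
```

After pre-training, the raster and vector encoders are the modules one would carry over to raster-based and vector-based downstream models respectively, which is the benefit the abstract claims.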
Related papers
- You'll Never Walk Alone: A Sketch and Text Duet for Fine-Grained Image Retrieval [120.49126407479717]
We introduce a novel compositionality framework, effectively combining sketches and text using pre-trained CLIP models.
Our system extends to novel applications in composed image retrieval, domain transfer, and fine-grained generation.
arXiv Detail & Related papers (2024-03-12T00:27:18Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data does not fully follow the distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Single-Stream Multi-Level Alignment for Vision-Language Pretraining [103.09776737512078]
We propose a single stream model that aligns the modalities at multiple levels.
We achieve this using two novel tasks: symmetric cross-modality reconstruction and pseudo-labeled keyword prediction.
We demonstrate top performance on a set of Vision-Language downstream tasks such as zero-shot/fine-tuned image/text retrieval, referring expression, and VQA.
arXiv Detail & Related papers (2022-03-27T21:16:10Z)
- SURDS: Self-Supervised Attention-guided Reconstruction and Dual Triplet Loss for Writer Independent Offline Signature Verification [16.499360910037904]
Offline Signature Verification (OSV) is a fundamental biometric task across various forensic, commercial and legal applications.
We propose a two-stage deep learning framework that leverages self-supervised representation learning as well as metric learning for writer-independent OSV.
The proposed framework has been evaluated on two publicly available offline signature datasets and compared with various state-of-the-art methods.
arXiv Detail & Related papers (2022-01-25T07:26:55Z)
- Self-Supervised Image-to-Text and Text-to-Image Synthesis [23.587581181330123]
We propose a novel self-supervised deep learning based approach towards learning the cross-modal embedding spaces.
In our approach, we first obtain dense vector representations of images using a StackGAN-based autoencoder model, as well as sentence-level dense vector representations using an LSTM-based text autoencoder.
arXiv Detail & Related papers (2021-12-09T13:54:56Z)
- LAViTeR: Learning Aligned Visual and Textual Representations Assisted by Image and Caption Generation [5.064384692591668]
This paper proposes LAViTeR, a novel architecture for visual and textual representation learning.
The main module, Visual Textual Alignment (VTA), is assisted by two auxiliary tasks: GAN-based image synthesis and Image Captioning.
The experimental results on two public datasets, CUB and MS-COCO, demonstrate superior visual and textual representation alignment.
arXiv Detail & Related papers (2021-09-04T22:48:46Z)
- Primitive Representation Learning for Scene Text Recognition [7.818765015637802]
We propose a primitive representation learning method that aims to exploit intrinsic representations of scene text images.
A Primitive REpresentation learning Network (PREN) is constructed to use the visual text representations for parallel decoding.
We also propose a framework called PREN2D to alleviate the misalignment problem in attention-based methods.
arXiv Detail & Related papers (2021-05-10T11:54:49Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, which motivates their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Which way? Direction-Aware Attributed Graph Embedding [2.429993132301275]
Graph embedding algorithms are used to efficiently represent a graph in a continuous vector space.
One aspect that is often overlooked is whether the graph is directed or not.
This study presents a novel text-enriched, direction-aware algorithm called DIAGRAM.
arXiv Detail & Related papers (2020-01-30T13:08:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (or of any information it contains) and is not responsible for any consequences of its use.