Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation
- URL: http://arxiv.org/abs/2107.12087v2
- Date: Tue, 27 Jul 2021 23:06:56 GMT
- Title: Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation
- Authors: Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Yi-Zhe Song
- Abstract summary: We argue for their unification -- we aim for a single model that can compete favourably with two separate state-of-the-art STR and HTR models.
We first show that cross-utilisation of STR and HTR models triggers significant performance drops due to differences in their inherent challenges.
We then tackle their union by introducing a knowledge distillation (KD) based framework.
- Score: 41.43280922432707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text recognition remains a fundamental and extensively researched topic in
computer vision, largely owing to its wide array of commercial applications.
The challenging nature of the problem, however, has dictated a fragmentation of
research efforts: Scene Text Recognition (STR), which deals with text in everyday
scenes, and Handwriting Text Recognition (HTR), which tackles hand-written text.
In this paper, for the first time, we argue for their unification -- we aim for
a single model that can compete favourably with two separate state-of-the-art
STR and HTR models
triggers significant performance drops due to differences in their inherent
challenges. We then tackle their union by introducing a knowledge distillation
(KD) based framework. This is however non-trivial, largely due to the
variable-length and sequential nature of text sequences, which renders
off-the-shelf KD techniques, which mostly work with global fixed-length data,
inadequate. For that, we propose three distillation losses all of which are
specifically designed to cope with the aforementioned unique characteristics of
text recognition. Empirical evidence suggests that our proposed unified model
performs on par with individual models, even surpassing them in certain cases.
Ablative studies demonstrate that naive baselines such as a two-stage
framework, and domain adaptation/generalisation alternatives do not work as well,
further verifying the appropriateness of our design.
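The paper's three distillation losses are not reproduced here, but the core difficulty they address can be illustrated: distilling over variable-length text sequences requires a per-timestep loss that masks out padded positions. Below is a minimal sketch of a length-masked, temperature-scaled sequence distillation loss; all function names and the temperature value are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def sequence_kd_loss(student_logits, teacher_logits, lengths, temperature=2.0):
    """Average per-character KL(teacher || student), masked so timesteps
    beyond each sequence's true length contribute nothing.
    Logits have shape (batch, max_len, vocab_size)."""
    log_ps = log_softmax(student_logits / temperature)   # (B, T, V)
    log_pt = log_softmax(teacher_logits / temperature)   # (B, T, V)
    pt = np.exp(log_pt)
    kl = (pt * (log_pt - log_ps)).sum(axis=-1)           # (B, T)
    max_len = student_logits.shape[1]
    mask = np.arange(max_len)[None, :] < np.asarray(lengths)[:, None]
    # temperature**2 rescales gradients, as in standard distillation
    return (kl * mask).sum() / mask.sum() * temperature ** 2
```

Masking matters because a batch mixes sequences of different lengths: without it, the loss would be dominated by padding positions the model never needs to predict.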
Related papers
- MOoSE: Multi-Orientation Sharing Experts for Open-set Scene Text Recognition [3.6227230205444902]
Open-set text recognition aims to address both novel characters and previously seen ones.
We first propose a Multi-Oriented Open-Set Text Recognition task (MOOSTR) to model the challenges of both novel characters and writing direction variety.
We then propose a Multi-Orientation Sharing Experts (MOoSE) framework as a strong baseline solution.
arXiv Detail & Related papers (2024-07-26T09:20:29Z)
- Towards Unified Multi-granularity Text Detection with Interactive Attention [56.79437272168507]
"Detect Any Text" is an advanced paradigm that unifies scene text detection, layout analysis, and document page detection into a cohesive, end-to-end model.
A pivotal innovation in DAT is the across-granularity interactive attention module, which significantly enhances the representation learning of text instances.
Tests demonstrate that DAT achieves state-of-the-art performances across a variety of text-related benchmarks.
arXiv Detail & Related papers (2024-05-30T07:25:23Z)
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text [61.22649031769564]
We propose a novel framework, paraphrased text span detection (PTD).
PTD aims to identify paraphrased text spans within a text.
We construct a dedicated dataset, PASTED, for paraphrased text span detection.
arXiv Detail & Related papers (2024-05-21T11:22:27Z)
- Relational Contrastive Learning for Scene Text Recognition [22.131554868199782]
We argue that prior contextual information can be interpreted as relations of textual primitives due to the heterogeneous text and background.
We propose to enrich the textual relations via rearrangement, hierarchy and interaction, and design a unified framework called RCLSTR: Relational Contrastive Learning for Scene Text Recognition.
arXiv Detail & Related papers (2023-08-01T12:46:58Z)
- Text-guided Image Restoration and Semantic Enhancement for Text-to-Image Person Retrieval [11.798006331912056]
The goal of Text-to-Image Person Retrieval (TIPR) is to retrieve specific person images according to the given textual descriptions.
We propose a novel TIPR framework to build fine-grained interactions and alignment between person images and the corresponding texts.
arXiv Detail & Related papers (2023-07-18T08:23:46Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- Toward Understanding WordArt: Corner-Guided Transformer for Scene Text Recognition [63.6608759501803]
We propose to recognize artistic text at three levels.
Firstly, corner points are applied to guide the extraction of local features inside characters, considering the robustness of corner structures to appearance and shape.
Secondly, we design a character contrastive loss to model the character-level feature, improving the feature representation for character classification.
Thirdly, we utilize Transformer to learn the global feature on image-level and model the global relationship of the corner points.
arXiv Detail & Related papers (2022-07-31T14:11:05Z)
- Text-DIAE: Degradation Invariant Autoencoders for Text Recognition and Document Enhancement [8.428866479825736]
Text-DIAE aims to solve two tasks, text recognition (handwritten or scene-text) and document image enhancement.
We define three pretext tasks as learning objectives to be optimized during pre-training, without using labelled data.
Our method surpasses the state-of-the-art significantly in existing supervised and self-supervised settings.
arXiv Detail & Related papers (2022-03-09T15:44:36Z)
- Continuous Offline Handwriting Recognition using Deep Learning Models [0.0]
Handwritten text recognition is an open problem of great interest in the area of automatic document image analysis.
We have proposed a new recognition model based on integrating two types of deep learning architectures: convolutional neural networks (CNN) and sequence-to-sequence (seq2seq) models.
The new proposed model provides competitive results with those obtained with other well-established methodologies.
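The hinge of any CNN-plus-seq2seq integration for handwriting is reshaping a 2-D convolutional feature map into a 1-D feature sequence the recurrent encoder can consume. A minimal sketch follows; the shapes, the mean-pooling over height, and the vanilla-RNN encoder are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN's output on a text-line image:
# a (channels, height, width) feature map that convolution and
# pooling layers would normally produce.
feat_map = rng.normal(size=(64, 4, 25))

# Collapse the height axis and treat width as the time axis, so the
# seq2seq encoder sees one feature vector per horizontal position.
seq = feat_map.mean(axis=1).T            # (25 timesteps, 64 features)

def rnn_encode(seq, Wx, Wh, b):
    """Minimal vanilla-RNN encoder: returns the final hidden state,
    which a decoder would condition on (or attend over, if all
    intermediate states were kept)."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

hidden = 128
Wx = rng.normal(scale=0.1, size=(hidden, 64))
Wh = rng.normal(scale=0.1, size=(hidden, hidden))
b = np.zeros(hidden)
context = rnn_encode(seq, Wx, Wh, b)     # (128,) summary for the decoder
```

Reading width as time is what lets a fixed CNN backbone handle text lines of arbitrary length: a wider image simply yields a longer feature sequence.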
arXiv Detail & Related papers (2021-12-26T07:31:03Z)
- Text Recognition in Real Scenarios with a Few Labeled Samples [55.07859517380136]
Scene text recognition (STR) remains a hot research topic in the computer vision field.
This paper proposes a few-shot adversarial sequence domain adaptation (FASDA) approach to build sequence adaptation.
Our approach can maximize the character-level confusion between the source domain and the target domain.
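FASDA's attention-based character alignment is not detailed in this summary, but "maximizing confusion" between domains is commonly implemented with a gradient-reversal layer: the feature extractor receives the domain discriminator's gradient with its sign flipped. The sketch below illustrates only that generic mechanism; every name, shape, and the logistic discriminator are assumptions, not FASDA's actual components.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reversed_domain_grad(feats, w, b, domain_labels):
    """Gradient of a logistic domain discriminator's cross-entropy
    loss with respect to the input features, sign-flipped (gradient
    reversal): following this direction pushes the feature extractor
    to *increase* discriminator loss, i.e. to make source and target
    features indistinguishable."""
    p = sigmoid(feats @ w + b)                        # P(target | feature)
    grad_feats = (p - domain_labels)[:, None] * w[None, :]
    return -grad_feats                                # reversed sign
```

In a full pipeline the discriminator itself descends the un-flipped gradient, so the two players train adversarially from a single backward pass.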
arXiv Detail & Related papers (2020-06-22T13:03:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.