Self-supervised Character-to-Character Distillation for Text Recognition
- URL: http://arxiv.org/abs/2211.00288v4
- Date: Fri, 18 Aug 2023 14:34:03 GMT
- Title: Self-supervised Character-to-Character Distillation for Text Recognition
- Authors: Tongkun Guan, Wei Shen, Xue Yang, Qi Feng, Zekun Jiang, Xiaokang Yang
- Abstract summary: We propose a novel self-supervised Character-to-Character Distillation method, CCD, which enables versatile augmentations to facilitate text representation learning.
CCD achieves state-of-the-art results, with average performance gains of 1.38% in text recognition, 1.7% in text segmentation, 0.24 dB (PSNR) and 0.0321 (SSIM) in text super-resolution.
- Score: 54.12490492265583
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When handling complicated text images (e.g., irregular structures, low
resolution, heavy occlusion, and uneven illumination), existing supervised text
recognition methods are data-hungry. Although these methods employ large-scale
synthetic text images to reduce the dependence on annotated real images, the
domain gap still limits the recognition performance. Therefore, learning robust
text feature representations from unlabeled real images via self-supervised
learning is a promising solution. However, existing self-supervised text recognition
methods conduct sequence-to-sequence representation learning by roughly
splitting the visual features along the horizontal axis, which limits the
flexibility of the augmentations, as large geometric-based augmentations may
lead to sequence-to-sequence feature inconsistency. Motivated by this, we
propose a novel self-supervised Character-to-Character Distillation method,
CCD, which enables versatile augmentations to facilitate general text
representation learning. Specifically, we delineate the character structures of
unlabeled real images by designing a self-supervised character segmentation
module. Following this, CCD easily enriches the diversity of local characters
while keeping their pairwise alignment under flexible augmentations, using the
transformation matrix between two augmented views of the same image. Experiments
demonstrate that CCD achieves state-of-the-art results, with average
performance gains of 1.38% in text recognition, 1.7% in text segmentation, 0.24
dB (PSNR) and 0.0321 (SSIM) in text super-resolution. Code is available at
https://github.com/TongkunGuan/CCD.
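As a concrete illustration of the core idea, the sketch below pairs character-level features from two augmented views using the known affine matrix relating them, then applies a cosine distillation loss between the aligned character embeddings. All names and shapes (warp_masks, pool_by_char, the mask and feature sizes) are illustrative assumptions, not the authors' implementation; see the repository above for the real code.

```python
# Sketch of character-to-character distillation (CCD-style); illustrative only.
import torch
import torch.nn.functional as F

def warp_masks(masks, theta, size):
    # masks: (K, H, W) soft character masks from view 1 (assumed to come from
    # a self-supervised character segmentation module); theta: the 2x3 affine
    # matrix relating the two augmented views; size: (H, W) of the feature map.
    grid = F.affine_grid(theta.unsqueeze(0), [1, masks.shape[0], *size],
                         align_corners=False)
    return F.grid_sample(masks.unsqueeze(0), grid,
                         align_corners=False).squeeze(0)

def pool_by_char(feat, masks, eps=1e-6):
    # Average-pool a (C, H, W) feature map over each of K soft masks -> (K, C).
    w = masks.flatten(1)                            # (K, H*W)
    w = w / w.sum(1, keepdim=True).clamp(min=eps)
    return w @ feat.flatten(1).t()

def ccd_loss(feat_student, feat_teacher, masks, theta, size):
    # Cosine distillation between pairwise-aligned character embeddings;
    # the teacher branch is a stop-gradient target.
    masks_t = warp_masks(masks, theta, size)
    zs = F.normalize(pool_by_char(feat_student, masks), dim=-1)
    zt = F.normalize(pool_by_char(feat_teacher, masks_t), dim=-1).detach()
    return (1 - (zs * zt).sum(-1)).mean()

# Toy usage: 3 character masks on a 32x128 feature grid, 256-dim features.
feat_s, feat_t = torch.randn(256, 32, 128), torch.randn(256, 32, 128)
masks = torch.rand(3, 32, 128)
theta = torch.tensor([[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]])  # small horizontal shift
print(ccd_loss(feat_s, feat_t, masks, theta, (32, 128)))
```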
Related papers
- Decoder Pre-Training with only Text for Scene Text Recognition [54.93037783663204]
Scene text recognition (STR) pre-training methods have achieved remarkable progress, primarily relying on synthetic datasets.
We introduce a novel method named Decoder Pre-Training with only Text for STR (DPTR).
DPTR treats text embeddings produced by the CLIP text encoder as pseudo visual embeddings and uses them to pre-train the decoder.
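The following is a rough sketch of that pre-training recipe under stated assumptions: a generic transformer decoder is trained with character-level cross-entropy while random tensors stand in for CLIP text-token embeddings (e.g., the last hidden states of a CLIP text encoder). None of the names reflect DPTR's actual code.

```python
# Sketch: pre-training a recognition decoder on pseudo visual embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharDecoder(nn.Module):
    """A tiny autoregressive decoder that cross-attends over 'visual' memory."""
    def __init__(self, vocab=100, dim=512, heads=8, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tgt_ids, memory):
        t = tgt_ids.size(1)
        causal = torch.triu(torch.full((t, t), float('-inf')), diagonal=1)
        h = self.decoder(self.embed(tgt_ids), memory, tgt_mask=causal)
        return self.head(h)

# During pre-training, CLIP text-token embeddings stand in for image features
# as the decoder memory; random tensors mock them here.
decoder = CharDecoder()
memory = torch.randn(4, 77, 512)            # pseudo visual embeddings
tgt_in = torch.randint(0, 100, (4, 25))     # shifted ground-truth characters
labels = torch.randint(0, 100, (4, 25))
loss = F.cross_entropy(decoder(tgt_in, memory).transpose(1, 2), labels)
```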
arXiv Detail & Related papers (2024-08-11T06:36:42Z)
- Scene Text Image Super-Resolution via Content Perceptual Loss and Criss-Cross Transformer Blocks [48.81850740907517]
We present TATSR, a Text-Aware Text Super-Resolution framework.
It effectively learns the unique text characteristics using Criss-Cross Transformer Blocks (CCTBs) and a novel Content Perceptual (CP) Loss.
It outperforms state-of-the-art methods in terms of both recognition accuracy and human perception.
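For intuition, here is a simplified, generic criss-cross attention block in the spirit of CCNet: each pixel attends over its own row and column, with separate row and column softmaxes for brevity. This is a sketch of the mechanism, not TATSR's CCTB implementation.

```python
# Sketch of criss-cross attention (simplified variant).
import torch
import torch.nn as nn

class CrissCrossAttention(nn.Module):
    """Each pixel attends to all pixels in its own row and column."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim // 8, 1)
        self.k = nn.Conv2d(dim, dim // 8, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Row attention: treat each row as a sequence of length w.
        qr = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        kr = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        vr = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        row = torch.softmax(qr @ kr.transpose(1, 2), -1) @ vr
        row = row.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Column attention: treat each column as a sequence of length h.
        qc = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        kc = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        vc = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        col = torch.softmax(qc @ kc.transpose(1, 2), -1) @ vc
        col = col.reshape(b, w, h, c).permute(0, 3, 2, 1)
        return x + self.gamma * (row + col)

x = torch.randn(2, 64, 16, 64)           # e.g., a low-res text feature map
print(CrissCrossAttention(64)(x).shape)  # torch.Size([2, 64, 16, 64])
```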
arXiv Detail & Related papers (2022-10-13T11:48:45Z)
- Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition [101.60244147302197]
We introduce contrastive learning and masked image modeling to jointly learn discriminative and generative representations of text images.
Our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets.
Our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with similar model size.
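A condensed sketch of how a discriminative (contrastive) and a generative (masked-image-modeling) objective can be combined is shown below; the encoders are mocked with random tensors, and the 0.5 loss weight and 75% mask ratio are placeholders, not the paper's settings.

```python
# Sketch: joint contrastive + masked-image-modeling objective.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    """Symmetric InfoNCE between two batches of view embeddings."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

def mim_loss(pred_patches, target_patches, mask):
    """Reconstruction error, measured only on the masked patches."""
    per_patch = (pred_patches - target_patches).pow(2).mean(-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

# Joint objective over a batch of text-image views and masked patches.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
pred, tgt = torch.randn(8, 49, 768), torch.randn(8, 49, 768)
mask = (torch.rand(8, 49) < 0.75).float()   # 75% of patches masked
loss = info_nce(z1, z2) + 0.5 * mim_loss(pred, tgt, mask)
```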
arXiv Detail & Related papers (2022-07-01T03:50:26Z)
- Language Matters: A Weakly Supervised Pre-training Approach for Scene Text Detection and Spotting [69.77701325270047]
This paper presents a weakly supervised pre-training method that can acquire effective scene text representations.
Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features.
Experiments show that our pre-trained model improves the F-score by +2.5% and +4.8% when its weights are transferred to other text detection and spotting networks.
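As a guess at what "character-aware" might look like in code, the sketch below embeds characters individually and contextualizes them with a small transformer, yielding per-character text features that a visual branch could align against. It is purely illustrative and not the paper's architecture.

```python
# Sketch of a character-aware text encoder; all sizes are assumptions.
import torch
import torch.nn as nn

class CharAwareTextEncoder(nn.Module):
    def __init__(self, vocab=97, dim=256, heads=4, layers=3, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.pos = nn.Parameter(torch.zeros(1, max_len, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, char_ids):                # (B, T) character indices
        x = self.embed(char_ids) + self.pos[:, :char_ids.size(1)]
        return self.encoder(x)                  # (B, T, dim) per-char features

enc = CharAwareTextEncoder()
feats = enc(torch.randint(0, 97, (4, 12)))     # e.g., 12-character words
print(feats.shape)                              # torch.Size([4, 12, 256])
```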
arXiv Detail & Related papers (2022-03-08T08:10:45Z)
- CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning to achieve text-to-pixel alignment.
Our proposed framework significantly outperforms the state-of-the-art performance without any post-processing.
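A minimal sketch of text-to-pixel alignment, assuming a single sentence embedding scores every pixel feature and the referred mask supplies positives and negatives; the shapes and the binary-cross-entropy formulation are simplifications, not CRIS's exact design.

```python
# Sketch: contrastive text-to-pixel alignment.
import torch
import torch.nn.functional as F

def text_to_pixel_loss(pixel_feat, text_feat, gt_mask, tau=0.07):
    """pixel_feat: (B, C, H, W); text_feat: (B, C); gt_mask: (B, H, W)."""
    pixel = F.normalize(pixel_feat, dim=1)
    text = F.normalize(text_feat, dim=1)
    score = torch.einsum('bchw,bc->bhw', pixel, text) / tau
    # Binary contrast: referred pixels are positives, the rest negatives.
    return F.binary_cross_entropy_with_logits(score, gt_mask)

pixel_feat = torch.randn(2, 256, 30, 40)
text_feat = torch.randn(2, 256)
gt_mask = (torch.rand(2, 30, 40) > 0.5).float()
print(text_to_pixel_loss(pixel_feat, text_feat, gt_mask))
```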
arXiv Detail & Related papers (2021-11-30T07:29:08Z)
- Implicit Feature Alignment: Learn to Convert Text Recognizer to Text Spotter [38.4211220941874]
We propose a simple, elegant, and effective paradigm called Implicit Feature Alignment (IFA).
IFA can be easily integrated into current text recognizers, resulting in a novel inference mechanism called IFA inference.
We experimentally demonstrate that IFA achieves state-of-the-art performance on end-to-end document recognition tasks.
arXiv Detail & Related papers (2021-06-10T17:06:28Z)
- Sequence-to-Sequence Contrastive Learning for Text Recognition [29.576864819760498]
We propose a framework for sequence-to-sequence contrastive learning (SeqCLR) of visual representations.
Experiments on handwritten text and on scene text show that when a text decoder is trained on the learned representations, our method outperforms non-sequential contrastive methods.
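A compact sketch of the sequence-level contrastive idea: each view's frame sequence is pooled into a few instances, and corresponding instances across views form positive pairs while all other instances in the batch serve as negatives. The instance count and temperature are illustrative, not SeqCLR's settings.

```python
# Sketch: sequence-to-sequence contrastive learning over frame instances.
import torch
import torch.nn.functional as F

def to_instances(frames, num_instances=5):
    """Average-pool a (B, T, C) frame sequence into num_instances chunks."""
    return F.adaptive_avg_pool1d(frames.transpose(1, 2),
                                 num_instances).transpose(1, 2)

def seq_contrastive_loss(frames1, frames2, tau=0.1):
    z1 = F.normalize(to_instances(frames1), dim=-1).flatten(0, 1)
    z2 = F.normalize(to_instances(frames2), dim=-1).flatten(0, 1)
    logits = z1 @ z2.t() / tau   # all other instances act as negatives
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Two augmented views, encoded into 25-frame sequences of 256-dim features.
v1, v2 = torch.randn(4, 25, 256), torch.randn(4, 25, 256)
print(seq_contrastive_loss(v1, v2))
```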
arXiv Detail & Related papers (2020-12-20T09:07:41Z)