MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining
- URL: http://arxiv.org/abs/2206.00311v3
- Date: Tue, 10 Oct 2023 03:06:45 GMT
- Title: MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining
- Authors: Pengyuan Lyu, Chengquan Zhang, Shanshan Liu, Meina Qiao, Yangliu Xu,
Liang Wu, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang
- Abstract summary: We propose a novel approach MaskOCR to unify vision and language pre-training in the classical encoder-decoder recognition framework.
We adopt the masked image modeling approach to pre-train the feature encoder using a large set of unlabeled real text images.
We transform text data into synthesized text images to unify the data modalities of vision and language, and enhance the language modeling capability of the sequence decoder.
- Score: 68.05105411320842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text images contain both visual and linguistic information. However, existing
pre-training techniques for text recognition mainly focus on either visual
representation learning or linguistic knowledge learning. In this paper, we
propose a novel approach MaskOCR to unify vision and language pre-training in
the classical encoder-decoder recognition framework. We adopt the masked image
modeling approach to pre-train the feature encoder using a large set of
unlabeled real text images, which allows us to learn strong visual
representations. In contrast to introducing linguistic knowledge with an
additional language model, we directly pre-train the sequence decoder.
Specifically, we transform text data into synthesized text images to unify the
data modalities of vision and language, and enhance the language modeling
capability of the sequence decoder using a proposed masked image-language
modeling scheme. Significantly, the encoder is frozen during the pre-training
phase of the sequence decoder. Experimental results demonstrate that our
proposed method achieves superior performance on benchmark datasets, including
Chinese and English text images.
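The masked image modeling step described above can be sketched in miniature: randomly select a fraction of image patches to hide, then train the encoder to reconstruct them from the visible ones. The following is a minimal, illustrative sketch of only the patch-sampling step; the 70% mask ratio and the 8x32 patch grid are assumptions for illustration, not values taken from the paper.

```python
import random

def sample_masked_patches(num_patches, mask_ratio, seed=0):
    """Randomly split patch indices into masked and visible sets.

    In masked image modeling, the encoder sees only the visible patches
    and is trained to reconstruct the masked ones, which is how strong
    visual representations can be learned from unlabeled text images.
    """
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    masked = sorted(indices[:num_masked])
    visible = sorted(indices[num_masked:])
    return masked, visible

# A hypothetical 32x128 text image cut into 4x4 patches gives an
# 8x32 grid, i.e. 256 patches in total.
masked, visible = sample_masked_patches(num_patches=256, mask_ratio=0.7)
print(len(masked), len(visible))  # 179 77
```

In the second stage described in the abstract, the pretrained encoder would be frozen while the decoder is pretrained on synthesized text images, so only the decoder's parameters receive gradient updates.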
Related papers
- DTrOCR: Decoder-only Transformer for Optical Character Recognition [0.0]
We propose a simpler and more effective method for text recognition, known as the Decoder-only Transformer for Optical Character Recognition (DTrOCR).
This method uses a decoder-only Transformer to take advantage of a generative language model that is pre-trained on a large corpus.
Our experiments demonstrated that DTrOCR outperforms current state-of-the-art methods by a large margin in the recognition of printed, handwritten, and scene text in both English and Chinese.
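A decoder-only recognizer of this kind conditions a generative language model on image embeddings and emits characters autoregressively. Below is a toy sketch of greedy autoregressive decoding, assuming an image prefix token and a stub scoring function; neither the vocabulary nor the stub model reflects DTrOCR's actual implementation.

```python
def greedy_decode(next_token_logits, prefix, eos, max_len=16):
    """Greedy autoregressive decoding, decoder-only style.

    `next_token_logits(seq)` returns one score per vocabulary id for the
    next token; in a decoder-only recognizer this role would be played by
    a pre-trained generative language model conditioned on image patch
    embeddings (stubbed out here).
    """
    seq = list(prefix)
    while len(seq) < max_len:
        scores = next_token_logits(seq)
        token = max(range(len(scores)), key=scores.__getitem__)
        seq.append(token)
        if token == eos:
            break
    return seq[len(prefix):]

# Toy model over a 4-token vocab {0:'h', 1:'i', 2:<eos>, 3:<img>}:
# after the image-prefix token it emits 'h', then 'i', then <eos>.
def toy_model(seq):
    emit = {3: 0, 0: 1, 1: 2}
    nxt = emit.get(seq[-1], 2)
    return [1.0 if i == nxt else 0.0 for i in range(4)]

print(greedy_decode(toy_model, prefix=[3], eos=2))  # [0, 1, 2]
```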
arXiv Detail & Related papers (2023-08-30T12:37:03Z)
- Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment [81.73717488887938]
Language-Quantized AutoEncoder (LQAE) learns to align text-image data in an unsupervised manner by leveraging pretrained language models.
LQAE learns to represent similar images with similar clusters of text tokens, thereby aligning these two modalities without the use of aligned text-image pairs.
This enables few-shot image classification with large language models (e.g., GPT-3) as well as linear classification of images based on BERT text features.
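The quantization idea behind LQAE can be illustrated as a nearest-neighbor lookup: each image feature vector is replaced by the id of its closest entry in a frozen text-token embedding table, so similar images map to similar token sequences. The tiny 2-D codebook below is purely illustrative and stands in for the language model's embedding table.

```python
def quantize_to_codebook(vectors, codebook):
    """Map each vector to the id of its nearest codebook entry (L2 distance).

    In LQAE-style quantization the codebook is a frozen text-token
    embedding table, so an image becomes a sequence of text-token ids.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sq_dist(v, codebook[i]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # stand-in for real embeddings
print(quantize_to_codebook([(0.1, 0.2), (0.9, 0.1), (0.2, 0.8)], codebook))
# [0, 1, 2]
```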
arXiv Detail & Related papers (2023-02-02T06:38:44Z)
- PreSTU: Pre-Training for Scene-Text Understanding [49.288302725486226]
We propose PreSTU, a novel pre-training recipe dedicated to scene-text understanding (STU).
PreSTU introduces OCR-aware pre-training objectives that encourage the model to recognize text from an image and connect it to the rest of the image content.
We empirically demonstrate the effectiveness of this pre-training approach on eight visual question answering and four image captioning benchmarks.
arXiv Detail & Related papers (2022-09-12T18:29:55Z)
- Image Captioning based on Feature Refinement and Reflective Decoding [0.0]
This paper introduces an encoder-decoder-based image captioning system.
It extracts spatial and global features for each region in the image using Faster R-CNN with a ResNet-101 backbone.
The decoder consists of an attention-based recurrent module and a reflective attention module to enhance the decoder's ability to model long-term sequential dependencies.
arXiv Detail & Related papers (2022-06-16T07:56:28Z)
- Vision-Language Pre-Training for Boosting Scene Text Detectors [57.08046351495244]
We specifically adapt vision-language joint learning for scene text detection.
We propose to learn contextualized, joint representations through vision-language pre-training.
The pre-trained model is able to produce more informative representations with richer semantics.
arXiv Detail & Related papers (2022-04-29T03:53:54Z)
- Language Matters: A Weakly Supervised Pre-training Approach for Scene Text Detection and Spotting [69.77701325270047]
This paper presents a weakly supervised pre-training method that can acquire effective scene text representations.
Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features.
Experiments show that our pre-trained model improves the F-score by +2.5% and +4.8% when its weights are transferred to other text detection and spotting networks.
arXiv Detail & Related papers (2022-03-08T08:10:45Z)
- Recurrent neural network transducer for Japanese and Chinese offline handwritten text recognition [5.704448607986111]
We propose an RNN-Transducer model for recognizing Japanese and Chinese offline handwritten text line images.
The proposed model takes advantage of both visual and linguistic information from the input image.
Experimental results show that the proposed model achieves state-of-the-art performance on all datasets.
arXiv Detail & Related papers (2021-06-28T08:16:44Z)
- Primitive Representation Learning for Scene Text Recognition [7.818765015637802]
We propose a primitive representation learning method that aims to exploit intrinsic representations of scene text images.
A Primitive REpresentation learning Network (PREN) is constructed to use the visual text representations for parallel decoding.
We also propose a framework called PREN2D to alleviate the misalignment problem in attention-based methods.
arXiv Detail & Related papers (2021-05-10T11:54:49Z)
- Enhanced Modality Transition for Image Captioning [51.72997126838352]
We build a Modality Transition Module (MTM) to transfer visual features into semantic representations before forwarding them to the language model.
During the training phase, the modality transition network is optimised by the proposed modality loss.
Experiments have been conducted on the MS-COCO dataset demonstrating the effectiveness of the proposed framework.
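At its simplest, a module that "transfers visual features into semantic representations" can be pictured as a learned mapping between the two feature spaces. The affine map below is only a stand-in for the Modality Transition Module; the real MTM is trained end-to-end with the proposed modality loss, and the dimensions and weights here are made up for illustration.

```python
def modality_transition(visual_feat, weight, bias):
    """A minimal stand-in for a modality-transition step: a learned
    affine map from the visual feature space into the language model's
    semantic space. All values here are illustrative."""
    return [sum(w * x for w, x in zip(row, visual_feat)) + b
            for row, b in zip(weight, bias)]

# Project a 3-d visual feature into a 2-d "semantic" space.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]
b = [0.5, -0.5]
print(modality_transition([2.0, 1.0, 3.0], W, b))  # [2.5, 3.5]
```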
arXiv Detail & Related papers (2021-02-23T07:20:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.