BOTD: Bold Outline Text Detector
- URL: http://arxiv.org/abs/2011.14714v6
- Date: Fri, 21 May 2021 10:47:02 GMT
- Title: BOTD: Bold Outline Text Detector
- Authors: Chuang Yang, Zhitong Xiong, Mulin Chen, Qi Wang, and Xuelong Li
- Abstract summary: We propose a new one-stage text detector, termed Bold Outline Text Detector (BOTD).
BOTD is able to process arbitrary-shaped text with low model complexity.
Experimental results on three real-world benchmarks show the state-of-the-art performance of BOTD.
- Score: 85.33700624095181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, text detection has attracted considerable attention in the fields of
computer vision and artificial intelligence. Among the existing approaches,
regression-based models struggle to handle texts with arbitrary shapes,
while segmentation-based algorithms have high computational costs and suffer
from the text adhesion problem. In this paper, we propose a new one-stage text
detector, termed Bold Outline Text Detector (BOTD), which is able to process
arbitrary-shaped text with low model complexity. Different from previous
works, BOTD utilizes the Polar Minimum Distance (PMD) to encode the shortest
distance between the center point and the contour of the text instance, and
generates a Center Mask (CM) for each text instance. After learning the PMD
heat map and the CM map, the final results can be obtained with a simple Text
Reconstruction Module (TRM). Since the CM resides entirely within the text box,
the text adhesion problem is avoided naturally. Meanwhile, all the points on
the text contour share the same PMD, so the complexity of BOTD is much lower
than that of existing segmentation-based methods. Experimental results on three
real-world benchmarks show the state-of-the-art performance of BOTD.
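As a rough illustration of the reconstruction idea (not the paper's actual TRM), a center mask can be grown outward by the predicted PMD to recover the full text region, since every contour point lies at the same distance from the center. The function name, array shapes, and the use of a single scalar PMD per instance are assumptions for this sketch:

```python
import numpy as np

def reconstruct_text_region(center_mask, pmd):
    """Minimal sketch of BOTD-style reconstruction: grow the binary
    center mask outward by the predicted PMD, so every pixel within
    `pmd` of a center pixel joins the text region."""
    h, w = center_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    region = np.zeros((h, w), dtype=bool)
    for y, x in zip(*np.nonzero(center_mask)):
        # Add the Euclidean disc of radius `pmd` around each center pixel
        region |= (yy - y) ** 2 + (xx - x) ** 2 <= pmd ** 2
    return region
```

In a real detector the center mask and PMD would come from the learned heat maps; here they are just inputs to show how the geometry composes.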
Related papers
- Spotlight Text Detector: Spotlight on Candidate Regions Like a Camera [31.180352896153682]
We propose an effective spotlight text detector (STD) for scene texts.
It consists of a spotlight calibration module (SCM) and a multivariate information extraction module (MIEM).
Our STD is superior to existing state-of-the-art methods on various datasets.
arXiv Detail & Related papers (2024-09-25T11:19:09Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- Contextual Text Block Detection towards Scene Text Understanding [85.40898487745272]
This paper presents contextual text detection, a new setup that detects contextual text blocks (CTBs) for better understanding of texts in scenes.
We formulate the new setup by a dual detection task which first detects integral text units and then groups them into a CTB.
To this end, we design a novel scene text clustering technique that treats integral text units as tokens and groups them (belonging to the same CTB) into an ordered token sequence.
arXiv Detail & Related papers (2022-07-26T14:59:25Z)
- TextDCT: Arbitrary-Shaped Text Detection via Discrete Cosine Transform Mask [19.269070203448187]
Arbitrary-shaped scene text detection is a challenging task due to the large variations of text in font, size, color, and orientation.
We propose a novel light-weight anchor-free text detection framework called TextDCT, which adopts the discrete cosine transform (DCT) to encode the text masks as compact vectors.
TextDCT achieves F-measure of 85.1 at 17.2 frames per second (FPS) and F-measure of 84.9 at 15.1 FPS for CTW1500 and Total-Text datasets, respectively.
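The DCT-mask idea summarized above can be illustrated with a minimal sketch: a binary text mask is transformed with an orthonormal 2-D DCT, and only the low-frequency coefficients are kept as the compact vector; the mask is recovered by zero-padding and inverting. This is not TextDCT's actual implementation; the function names, mask size, and threshold are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix: row k, column i."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] *= 1 / np.sqrt(2)  # DC row gets the sqrt(1/n) normalization
    return c * np.sqrt(2 / n)

def encode_mask(mask, keep=8):
    """Compress a square binary mask into its low-frequency DCT block."""
    C = dct_matrix(mask.shape[0])
    coeffs = C @ mask @ C.T          # 2-D DCT of the mask
    return coeffs[:keep, :keep]      # compact vector of low frequencies

def decode_mask(vec, n):
    """Reconstruct an n-by-n binary mask from the kept coefficients."""
    coeffs = np.zeros((n, n))
    coeffs[:vec.shape[0], :vec.shape[1]] = vec
    C = dct_matrix(n)
    return (C.T @ coeffs @ C) > 0.5  # inverse DCT, then threshold
```

Keeping fewer coefficients than the mask size gives a lossy but compact representation; keeping all of them makes the round trip exact, since the DCT matrix is orthogonal.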
arXiv Detail & Related papers (2022-06-27T15:42:25Z)
- Arbitrary Shape Text Detection using Transformers [2.294014185517203]
We propose an end-to-end trainable architecture for arbitrary-shaped text detection using Transformers (DETR).
At its core, our proposed method leverages a bounding box loss function that accurately measures the arbitrary detected text regions' changes in scale and aspect ratio.
We evaluate our proposed model using Total-Text and CTW-1500 datasets for curved text, and MSRA-TD500 and ICDAR15 datasets for multi-oriented text.
arXiv Detail & Related papers (2022-02-22T22:36:29Z)
- CORE-Text: Improving Scene Text Detection with Contrastive Relational Reasoning [65.57338873921168]
Localizing text instances in natural scenes is regarded as a fundamental challenge in computer vision.
In this work, we quantitatively analyze the sub-text problem and present a simple yet effective design, COntrastive RElation (CORE) module.
We integrate the CORE module into a two-stage text detector of Mask R-CNN and devise our text detector CORE-Text.
arXiv Detail & Related papers (2021-12-14T16:22:25Z)
- CentripetalText: An Efficient Text Instance Representation for Scene Text Detection [19.69057252363207]
We propose an efficient text instance representation named CentripetalText (CT).
CT decomposes text instances into the combination of text kernels and centripetal shifts.
For the task of scene text detection, our approach achieves superior or competitive performance compared to other existing methods.
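The kernel-plus-centripetal-shift decomposition can be sketched as a grouping step: each text pixel follows its predicted shift vector and adopts the label of the kernel it lands on. This is a toy illustration, not CentripetalText's actual post-processing; the function name, array layout, and single-step shift are assumptions:

```python
import numpy as np

def group_pixels(text_mask, kernel_labels, shifts):
    """Assign each text pixel to a kernel by following its centripetal shift.
    text_mask: (H, W) bool map of text pixels.
    kernel_labels: (H, W) int map, 0 = background, k > 0 = kernel id.
    shifts: (2, H, W) predicted (dy, dx) pointing toward the kernel."""
    h, w = text_mask.shape
    labels = np.zeros((h, w), dtype=int)
    ys, xs = np.nonzero(text_mask)
    # Landing position of each text pixel after applying its shift
    ty = np.clip(ys + shifts[0, ys, xs], 0, h - 1).round().astype(int)
    tx = np.clip(xs + shifts[1, ys, xs], 0, w - 1).round().astype(int)
    labels[ys, xs] = kernel_labels[ty, tx]
    return labels
```

Because pixels of the same instance point into the same kernel, this grouping separates adjacent instances without any per-pixel classification of instance identity.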
arXiv Detail & Related papers (2021-07-13T09:34:18Z)
- All you need is a second look: Towards Tighter Arbitrary shape text detection [80.85188469964346]
Long curved text instances tend to be fragmented because of the limited receptive field size of CNNs.
Simple representations using rectangle or quadrangle bounding boxes fall short when dealing with more challenging arbitrary-shaped texts.
NASK reconstructs text instances with a tighter representation using the predicted geometrical attributes.
arXiv Detail & Related papers (2020-04-26T17:03:41Z)
- Text Perceptron: Towards End-to-End Arbitrary-Shaped Text Spotting [49.768327669098674]
We propose an end-to-end trainable text spotting approach named Text Perceptron.
It first employs an efficient segmentation-based text detector that learns the latent text reading order and boundary information.
Then a novel Shape Transform Module (abbr. STM) is designed to transform the detected feature regions into regular morphologies.
arXiv Detail & Related papers (2020-02-17T08:07:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.