Towards Unified Scene Text Spotting based on Sequence Generation
- URL: http://arxiv.org/abs/2304.03435v1
- Date: Fri, 7 Apr 2023 01:28:08 GMT
- Title: Towards Unified Scene Text Spotting based on Sequence Generation
- Authors: Taeho Kil, Seonghyeon Kim, Sukmin Seo, Yoonsik Kim, Daehee Kim
- Abstract summary: We propose a UNIfied scene Text Spotter, called UNITS.
Our model unifies various detection formats, including quadrilaterals and polygons.
We apply starting-point prompting to enable the model to extract texts from an arbitrary starting point.
- Score: 4.437335677401287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequence generation models have recently made significant progress in
unifying various vision tasks. Although some auto-regressive models have
demonstrated promising results in end-to-end text spotting, they use specific
detection formats while ignoring various text shapes and are limited in the
maximum number of text instances that can be detected. To overcome these
limitations, we propose a UNIfied scene Text Spotter, called UNITS. Our model
unifies various detection formats, including quadrilaterals and polygons,
allowing it to detect text in arbitrary shapes. Additionally, we apply
starting-point prompting to enable the model to extract texts from an arbitrary
starting point, thereby extracting more texts beyond the number of instances it
was trained on. Experimental results demonstrate that our method achieves
competitive performance compared to state-of-the-art methods. Further analysis
shows that UNITS can extract a larger number of texts than it was trained on.
We provide the code for our method at https://github.com/clovaai/units.
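The core idea of the abstract, casting text spotting as sequence generation with starting-point prompting, can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the bin count, the special token names (`<text>`, `<eot>`), and the raster-order prompting rule are all assumptions made for the sketch.

```python
# Hypothetical sketch of text spotting as sequence generation: each text
# instance is serialized as quantized coordinate tokens followed by its
# character tokens, and a starting-point prompt restricts decoding to
# instances at or after that point in reading order.

NUM_BINS = 1000  # number of quantization bins for x/y coordinates (assumption)

def quantize(coord, size):
    """Map a pixel coordinate to a discrete bin index."""
    return min(int(coord / size * NUM_BINS), NUM_BINS - 1)

def serialize_instance(points, text, img_w, img_h):
    """Serialize one instance: coordinate tokens, then character tokens.

    `points` may be a quadrilateral (4 points) or a polygon (N points),
    matching the unified detection formats described in the abstract.
    """
    tokens = []
    for x, y in points:
        tokens.append(f"<x_{quantize(x, img_w)}>")
        tokens.append(f"<y_{quantize(y, img_h)}>")
    tokens.append("<text>")
    tokens.extend(list(text))
    tokens.append("<eot>")  # end-of-instance marker (assumption)
    return tokens

def build_sequence(instances, img_w, img_h, start_point=(0, 0)):
    """Starting-point prompting: emit the prompt point, then only the
    instances at or after it in raster (top-to-bottom, left-to-right)
    order, so decoding can resume past the trained instance limit."""
    sx, sy = start_point
    ordered = sorted(instances, key=lambda inst: (inst["points"][0][1],
                                                  inst["points"][0][0]))
    seq = [f"<x_{quantize(sx, img_w)}>", f"<y_{quantize(sy, img_h)}>"]
    for inst in ordered:
        x0, y0 = inst["points"][0]
        if (y0, x0) >= (sy, sx):  # tuple compare: y first, then x
            seq += serialize_instance(inst["points"], inst["text"],
                                      img_w, img_h)
    return seq
```

Under this scheme, extracting "more texts than trained on" amounts to decoding once, then re-prompting with the last emitted point as the new starting point.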
Related papers
- Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
With fewer text instances, our produced text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z)
- Copy Is All You Need [66.00852205068327]
We formulate text generation as progressively copying text segments from an existing text collection.
Our approach achieves better generation quality according to both automatic and human evaluations.
Our approach attains additional performance gains by simply scaling up to larger text collections.
arXiv Detail & Related papers (2023-07-13T05:03:26Z)
- DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer [94.35116535588332]
Transformer-based methods, which predict polygon points or Bezier curve control points to localize texts, are quite popular in scene text detection.
However, the point label form used implies a human reading order, which affects the robustness of the Transformer model.
We propose DPText-DETR, which directly uses point coordinates as queries and dynamically updates them between decoder layers.
arXiv Detail & Related papers (2022-07-10T15:45:16Z)
- Few Could Be Better Than All: Feature Sampling and Grouping for Scene Text Detection [47.820683360286786]
We present a transformer-based architecture for scene text detection.
We first select a few representative features at all scales that are highly relevant to foreground text.
As each feature group corresponds to a text instance, its bounding box can be easily obtained without any post-processing operation.
arXiv Detail & Related papers (2022-03-29T04:02:31Z)
- Towards End-to-End Unified Scene Text Detection and Layout Analysis [60.68100769639923]
We introduce the task of unified scene text detection and layout analysis.
The first hierarchical scene text dataset is introduced to enable this novel research task.
We also propose a novel method that is able to simultaneously detect scene text and form text clusters in a unified way.
arXiv Detail & Related papers (2022-03-28T23:35:45Z)
- DEER: Detection-agnostic End-to-End Recognizer for Scene Text Spotting [11.705454066278898]
We propose DEER, a novel Detection-agnostic End-to-End Recognizer framework.
The proposed method reduces the tight dependency between detection and recognition modules.
It achieves competitive results on regular and arbitrarily-shaped text spotting benchmarks.
arXiv Detail & Related papers (2022-03-10T02:41:05Z)
- SPTS: Single-Point Text Spotting [128.52900104146028]
We show that scene text spotting models can be trained with an extremely low-cost annotation of a single point per instance.
We propose an end-to-end scene text spotting method that tackles scene text spotting as a sequence prediction task.
arXiv Detail & Related papers (2021-12-15T06:44:21Z)
- Video Text Tracking With a Spatio-Temporal Complementary Model [46.99051486905713]
Text tracking aims to track multiple texts in a video and construct a trajectory for each text.
Existing methods tackle this task using the tracking-by-detection framework.
We argue that the tracking accuracy of this paradigm is severely limited in more complex scenarios.
arXiv Detail & Related papers (2021-11-09T08:23:06Z)
- Which and Where to Focus: A Simple yet Accurate Framework for Arbitrary-Shaped Nearby Text Detection in Scene Images [8.180563824325086]
We propose a simple yet effective method for accurate arbitrary-shaped nearby scene text detection.
A One-to-Many Training Scheme (OMTS) is designed to eliminate confusion and enable the proposals to learn more appropriate ground truths.
We also propose a Proposal Feature Attention Module (PFAM) to exploit more effective features for each proposal.
arXiv Detail & Related papers (2021-09-08T06:25:37Z)
- Scene Text Detection with Scribble Lines [59.698806258671105]
We propose to annotate texts by scribble lines instead of polygons for text detection.
It is a general labeling method for texts of various shapes and keeps labeling costs low.
Experiments show that the proposed method bridges the performance gap between the weakly labeling method and the original polygon-based labeling methods.
arXiv Detail & Related papers (2020-12-09T13:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.