Weakly-Supervised Text Instance Segmentation
- URL: http://arxiv.org/abs/2303.10848v2
- Date: Thu, 23 Mar 2023 07:56:07 GMT
- Title: Weakly-Supervised Text Instance Segmentation
- Authors: Xinyan Zu, Haiyang Yu, Bin Li, Xiangyang Xue
- Abstract summary: We make the first attempt at weakly-supervised text instance segmentation by bridging text recognition and text segmentation.
The proposed method significantly outperforms weakly-supervised instance segmentation methods on the ICDAR13-FST (18.95% improvement) and TextSeg (17.80% improvement) benchmarks.
- Score: 44.20745377169349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text segmentation is a challenging vision task with many downstream
applications. Current text segmentation methods require pixel-level
annotations, which are expensive in human labor and limited in application
scenarios. In this paper, we make the first attempt at weakly-supervised text
instance segmentation by bridging text recognition and text segmentation. The
insight is that text recognition methods provide the precise attention position
of each text instance, and this attention location can be fed to both a text
adaptive refinement head (TAR) and a text segmentation head.
Specifically, the proposed TAR generates pseudo labels by performing two-stage
iterative refinement operations on the attention location to fit the accurate
boundaries of the corresponding text instance. Meanwhile, the text segmentation
head takes the rough attention location to predict segmentation masks which are
supervised by the aforementioned pseudo labels. In addition, we design a
mask-augmented contrastive learning by treating our segmentation result as an
augmented version of the input text image, thus improving the visual
representation and further enhancing the performance of both recognition and
segmentation. The experimental results demonstrate that the proposed method
significantly outperforms weakly-supervised instance segmentation methods on
ICDAR13-FST (18.95$\%$ improvement) and TextSeg (17.80$\%$ improvement)
benchmarks.
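The abstract does not give TAR's internals, but the two-stage idea (a coarse mask from the recognition attention, then iterative refinement toward the text boundary) can be illustrated with a minimal NumPy sketch. Everything here is hypothetical: the function name `refine_pseudo_label`, the thresholds, and the intensity-similarity growth rule are assumptions for illustration, not the authors' TAR implementation.

```python
import numpy as np

def refine_pseudo_label(attention, image, init_thresh=0.5, iters=2, sim_thresh=0.2):
    """Sketch of two-stage pseudo-label refinement (hypothetical).

    Stage 1: threshold the recognition attention map to get a coarse mask.
    Stage 2: iteratively grow the mask toward neighbouring pixels whose
    intensity is close to the current foreground estimate, so the mask
    fits the text boundary more tightly.
    """
    mask = attention > init_thresh          # stage 1: coarse location
    for _ in range(iters):                  # stage 2: iterative boundary fitting
        fg_mean = image[mask].mean() if mask.any() else image.mean()
        # 4-neighbour dilation of the current mask
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        # keep pixels only if they resemble the foreground
        mask = grown & (np.abs(image - fg_mean) < sim_thresh)
    return mask.astype(np.float32)
```

In the paper's pipeline, masks produced this way would serve as pseudo labels supervising the text segmentation head, while the segmentation head itself consumes the rough attention location directly.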
Related papers
- EAFormer: Scene Text Segmentation with Edge-Aware Transformers [56.15069996649572]
Scene text segmentation aims at cropping texts from scene images, which is usually used to help generative models edit or remove texts.
We propose Edge-Aware Transformers, EAFormer, to segment texts more accurately, especially at the edge of texts.
arXiv Detail & Related papers (2024-07-24T06:00:33Z)
- Efficiently Leveraging Linguistic Priors for Scene Text Spotting [63.22351047545888]
This paper proposes a method that leverages linguistic knowledge from a large text corpus to replace the traditional one-hot encoding used in auto-regressive scene text spotting and recognition models.
We generate text distributions that align well with scene text datasets, removing the need for in-domain fine-tuning.
Experimental results show that our method not only improves recognition accuracy but also enables more accurate localization of words.
arXiv Detail & Related papers (2024-02-27T01:57:09Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View Semantic Consistency [126.88107868670767]
We propose Multi-View Consistent learning (ViewCo) for text-supervised semantic segmentation.
We first propose text-to-views consistency modeling to learn correspondence for multiple views of the same input image.
We also propose cross-view segmentation consistency modeling to address the ambiguity issue of text supervision.
arXiv Detail & Related papers (2023-01-31T01:57:52Z)
- Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs [10.484851004093919]
We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images.
Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts.
We propose a novel Text-grounded Contrastive Learning framework that enables a model to directly learn region-text alignment.
arXiv Detail & Related papers (2022-12-01T18:59:03Z)
- Attention-based Feature Decomposition-Reconstruction Network for Scene Text Detection [20.85468268945721]
We propose an attention-based feature decomposition-reconstruction network for scene text detection.
We use contextual information and low-level features to enhance the performance of a segmentation-based text detector.
Experiments have been conducted on two public benchmark datasets and results show that our proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-11-29T06:15:25Z)
- DGST: Discriminator Guided Scene Text detector [11.817428636084305]
This paper proposes a detector framework based on conditional generative adversarial networks to improve the segmentation quality of scene text detection.
Experiments on standard datasets demonstrate that the proposed DGST brings noticeable gains and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-02-28T01:47:36Z)
- Text Perceptron: Towards End-to-End Arbitrary-Shaped Text Spotting [49.768327669098674]
We propose an end-to-end trainable text spotting approach named Text Perceptron.
It first employs an efficient segmentation-based text detector that learns the latent text reading order and boundary information.
Then a novel Shape Transform Module (abbr. STM) is designed to transform the detected feature regions into regular morphologies.
arXiv Detail & Related papers (2020-02-17T08:07:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.