Self-supervised Scene Text Segmentation with Object-centric Layered
Representations Augmented by Text Regions
- URL: http://arxiv.org/abs/2308.13178v1
- Date: Fri, 25 Aug 2023 05:00:05 GMT
- Title: Self-supervised Scene Text Segmentation with Object-centric Layered
Representations Augmented by Text Regions
- Authors: Yibo Wang, Yunhu Ye, Yuanpeng Mao, Yanwei Yu and Yuanping Song
- Abstract summary: We propose a self-supervised scene text segmentation algorithm with layered decoupling of representations, derived in an object-centric manner, to segment images into text and background.
On several public scene text datasets, our method outperforms the state-of-the-art unsupervised segmentation algorithms.
- Score: 22.090074821554754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text segmentation has a wide range of applications, such as image editing, style transfer, and watermark removal. However, existing public datasets suffer from poor-quality pixel-level labels, which are notoriously costly to acquire in both money and time. At the same time, when pretraining is performed on synthetic datasets, the synthetic data distribution is far from that of real scenes. Both issues pose a major challenge to current pixel-level text segmentation algorithms. To alleviate these problems, we propose a self-supervised scene text segmentation algorithm with layered decoupling of representations, derived in an object-centric manner, to segment images into text and background. Our method introduces two novel designs, a Region Query Module and Representation Consistency Constraints, which adapt to the unique properties of text and complement the Auto Encoder, improving the network's sensitivity to text. Under this design, we treat the polygon-level masks predicted by a text localization model as extra input information, and we neither use any pixel-level mask annotations during training nor pretrain on synthetic datasets. Extensive experiments show the effectiveness of the proposed method: on several public scene text datasets, it outperforms state-of-the-art unsupervised segmentation algorithms.
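The abstract describes the training signal only at a high level. As a rough illustration, the following PyTorch sketch combines the three ingredients it names: an auto-encoder reconstruction loss, region guidance from detector polygons, and a consistency constraint across augmented views. All module names, architecture details, and loss weights here are hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the training objective described in the abstract.
# LayeredAutoEncoder, the region loss, and the consistency loss are all
# illustrative assumptions, not the paper's released code.
import torch
import torch.nn.functional as F
from torch import nn

class LayeredAutoEncoder(nn.Module):
    """Encodes an image and decodes K object-centric layers
    (here K=2: text vs. background) plus per-layer alpha masks."""
    def __init__(self, dim=64, layers=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(dim, layers * 4, 3, padding=1)  # RGB + alpha per layer
        self.layers = layers

    def forward(self, x):
        h = self.decoder(self.encoder(x))
        rgb, alpha = h[:, : self.layers * 3], h[:, self.layers * 3:]
        rgb = rgb.view(-1, self.layers, 3, *x.shape[-2:]).sigmoid()
        alpha = alpha.softmax(dim=1).unsqueeze(2)   # layers compete per pixel
        recon = (alpha * rgb).sum(dim=1)            # alpha-composite the layers
        return recon, alpha.squeeze(2)

def training_step(model, image, polygon_mask):
    """polygon_mask: rasterized polygons from an off-the-shelf text
    detector, used only as weak region guidance (no pixel-level labels)."""
    recon, alpha = model(image)
    text_alpha = alpha[:, 0]                        # treat layer 0 as the text layer
    loss_rec = F.mse_loss(recon, image)             # auto-encoder reconstruction
    # Region guidance: suppress text evidence outside detector polygons.
    loss_region = (text_alpha * (1 - polygon_mask)).mean()
    # Consistency: predictions should agree across augmented views.
    flipped = torch.flip(image, dims=[-1])
    _, alpha_f = model(flipped)
    loss_cons = F.l1_loss(torch.flip(alpha_f[:, 0], dims=[-1]), text_alpha)
    return loss_rec + loss_region + loss_cons
```

A real implementation would use a stronger backbone and the paper's actual Region Query Module; the sketch only shows how polygon masks can steer an otherwise label-free objective.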
Related papers
- Decoder Pre-Training with only Text for Scene Text Recognition [54.93037783663204]
Scene text recognition (STR) pre-training methods have achieved remarkable progress, primarily relying on synthetic datasets.
We introduce a novel method named Decoder Pre-training with only text for STR (DPTR).
DPTR treats text embeddings produced by the CLIP text encoder as pseudo visual embeddings and uses them to pre-train the decoder.
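A minimal sketch of this idea follows, assuming a generic Transformer decoder and the Hugging Face CLIP text encoder; the paper's exact decoder and recognition head are not specified here, so those parts are illustrative.

```python
# Sketch of the DPTR idea as summarized above: CLIP text embeddings stand
# in for visual features to pre-train an STR decoder. The decoder layout,
# query tensor, and head are assumptions for illustration.
import torch
from torch import nn
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").eval()

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=3)

def pretrain_step(words, char_queries, head):
    """words: list of strings; char_queries: learned (B, T, 512) queries;
    head: linear layer mapping decoder output to the character vocabulary."""
    with torch.no_grad():
        tok = tokenizer(words, padding=True, return_tensors="pt")
        # Per-token text embeddings serve as pseudo visual "memory".
        pseudo_visual = text_encoder(**tok).last_hidden_state
    out = decoder(tgt=char_queries, memory=pseudo_visual)
    return head(out)  # (B, T, vocab) logits, trained with cross-entropy on `words`
```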
arXiv Detail & Related papers (2024-08-11T06:36:42Z) - WAS: Dataset and Methods for Artistic Text Segmentation [57.61335995536524]
This paper focuses on the more challenging task of artistic text segmentation and constructs a real artistic text segmentation dataset.
We propose a decoder with the layer-wise momentum query to prevent the model from ignoring stroke regions of special shapes.
We also propose a skeleton-assisted head to guide the model to focus on the global structure.
arXiv Detail & Related papers (2024-07-31T18:29:36Z) - Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using
Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
Despite containing fewer text instances, the text images we produce consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z) - Beyond Generation: Harnessing Text to Image Models for Object Detection
and Segmentation [29.274362919954218]
We propose a new paradigm to automatically generate training data with accurate labels at scale.
The proposed approach decouples training data generation into foreground object generation, and contextually coherent background generation.
We demonstrate the advantages of our approach on five object detection and segmentation datasets.
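A hedged sketch of this decoupled-generation idea follows, assuming an off-the-shelf diffusion pipeline and a deliberately naive foreground extraction; the paper's actual pipeline, prompts, and matting are not reproduced here.

```python
# Illustrative sketch: synthesize a foreground object and a background
# separately, composite them, and keep the paste mask/box as a free
# segmentation/detection label. Model choice and prompts are assumptions.
from diffusers import StableDiffusionPipeline
import numpy as np
from PIL import Image

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

fg = pipe("a photo of a red mug on a plain white background").images[0]
bg = pipe("an empty wooden kitchen table, photorealistic").images[0]

# Crude foreground extraction: everything that is not near-white.
fg_np = np.array(fg)
mask = (fg_np.mean(axis=-1) < 240).astype(np.uint8) * 255

# Composite and record the labels for free.
bg.paste(fg, (0, 0), Image.fromarray(mask))
ys, xs = np.nonzero(mask)
bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # detection label
bg.save("sample.png")  # `mask` doubles as the segmentation label
```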
arXiv Detail & Related papers (2023-09-12T04:41:45Z) - SpaText: Spatio-Textual Representation for Controllable Image Generation [61.89548017729586]
SpaText is a new method for text-to-image generation using open-vocabulary scene control.
In addition to a global text prompt that describes the entire scene, the user provides a segmentation map.
We show its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-based.
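A minimal sketch of what such a spatio-textual representation could look like, assuming a CLIP text encoder and a dense conditioning tensor; the encoder choice and tensor layout are illustrative, not SpaText's exact formulation.

```python
# Sketch: embed each region's free-form description and paint it into
# that region's pixels, yielding a (D, H, W) conditioning tensor that a
# diffusion model could consume alongside the global prompt.
import torch
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
encoder = CLIPTextModelWithProjection.from_pretrained(
    "openai/clip-vit-base-patch32").eval()

def spatio_textual(masks, descriptions, hw=(64, 64)):
    """masks: list of (H, W) bool tensors; descriptions: matching strings."""
    cond = torch.zeros(encoder.config.projection_dim, *hw)
    for mask, text in zip(masks, descriptions):
        with torch.no_grad():
            tok = tokenizer([text], return_tensors="pt")
            emb = encoder(**tok).text_embeds[0]    # (D,) sentence embedding
        cond[:, mask] = emb.unsqueeze(1)           # paint embedding into region
    return cond
```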
arXiv Detail & Related papers (2022-11-25T18:59:10Z) - SceneComposer: Any-Level Semantic Image Synthesis [80.55876413285587]
We propose a new framework for conditional image synthesis from semantic layouts of any precision levels.
The framework naturally reduces to text-to-image (T2I) at the lowest level with no shape information, and it becomes segmentation-to-image (S2I) at the highest level.
We introduce several novel techniques to address the challenges coming with this new setup.
arXiv Detail & Related papers (2022-11-21T18:59:05Z) - CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning for achieving the text-to-pixel alignment.
Our proposed framework significantly outperforms the previous state of the art without any post-processing.
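A compact sketch of text-to-pixel contrastive alignment in this spirit, assuming normalized features and a binary ground-truth region; the temperature and exact loss form are illustrative assumptions.

```python
# Sketch: pull pixels inside the referred region toward the sentence
# embedding and push the rest away, via per-pixel similarity logits.
import torch
import torch.nn.functional as F

def text_to_pixel_contrast(pixel_feats, text_feat, gt_mask, tau=0.07):
    """pixel_feats: (B, C, H, W) visual features; text_feat: (B, C)
    sentence embedding; gt_mask: (B, H, W) binary mask of the region."""
    B, C, H, W = pixel_feats.shape
    pix = F.normalize(pixel_feats.flatten(2), dim=1)   # (B, C, HW)
    txt = F.normalize(text_feat, dim=1).unsqueeze(1)   # (B, 1, C)
    logits = torch.bmm(txt, pix).squeeze(1) / tau      # (B, HW) similarities
    target = gt_mask.flatten(1).float()                # 1 inside region, 0 outside
    return F.binary_cross_entropy_with_logits(logits, target)
```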
arXiv Detail & Related papers (2021-11-30T07:29:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.