Progressive Scene Text Erasing with Self-Supervision
- URL: http://arxiv.org/abs/2207.11469v2
- Date: Fri, 28 Apr 2023 09:36:53 GMT
- Title: Progressive Scene Text Erasing with Self-Supervision
- Authors: Xiangcheng Du and Zhao Zhou and Yingbin Zheng and Xingjiao Wu and
Tianlong Ma and Cheng Jin
- Abstract summary: Scene text erasing seeks to erase text contents from scene images.
Current state-of-the-art text erasing models are trained on large-scale synthetic data.
We employ self-supervision for feature representation on unlabeled real-world scene text images.
- Score: 7.118419154170154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene text erasing seeks to erase text contents from scene images and current
state-of-the-art text erasing models are trained on large-scale synthetic data.
Although data synthetic engines can provide vast amounts of annotated training
samples, there are differences between synthetic and real-world data. In this
paper, we employ self-supervision for feature representation on unlabeled
real-world scene text images. A novel pretext task is designed to keep the
text stroke masks of image variants consistent. We design a Progressive
Erasing Network to remove residual text. The scene text is erased
progressively by leveraging the intermediate generated results which provide
the foundation for subsequent higher quality results. Experiments show that our
method significantly improves the generalization of the text erasing task and
achieves state-of-the-art performance on public benchmarks.
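The progressive strategy can be sketched as a simple loop that feeds each intermediate result back into the erasing step, so later passes can remove residual text left by earlier ones. Below is a minimal toy illustration; `predict_stroke_mask`, `inpaint`, and the toy stand-ins are hypothetical placeholders for the learned components, not the paper's actual network:

```python
import numpy as np

def erase_step(image, predict_stroke_mask, inpaint):
    """One erasing pass: predict text stroke pixels, then fill them in."""
    mask = predict_stroke_mask(image)            # binary H x W stroke mask
    erased = np.where(mask[..., None] > 0, inpaint(image, mask), image)
    return erased, mask

def progressive_erase(image, predict_stroke_mask, inpaint, num_stages=3):
    """Feed each intermediate result back in, so later stages can remove
    residual text that earlier stages missed."""
    out = image
    for _ in range(num_stages):
        out, mask = erase_step(out, predict_stroke_mask, inpaint)
        if mask.sum() == 0:                      # nothing left to erase
            break
    return out

# Toy stand-ins for the learned components (hypothetical):
def toy_mask(img):
    # Treat very bright pixels as "text" strokes.
    return (img > 200).all(axis=-1).astype(np.uint8)

def toy_inpaint(img, mask):
    # Fill with the mean color of the non-text region.
    fill = img[mask == 0].mean() if (mask == 0).any() else 0
    return np.full_like(img, fill)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1] = 255                                  # one "text" pixel
result = progressive_erase(img, toy_mask, toy_inpaint)
print(toy_mask(result).sum())                    # → 0: no text strokes remain
```

The early-exit check stands in for the paper's idea that intermediate results provide the foundation for subsequent, higher-quality results: each stage only has to clean up what the previous stage left behind.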
Related papers
- DeepEraser: Deep Iterative Context Mining for Generic Text Eraser [103.39279154750172]
DeepEraser is a recurrent architecture that erases the text in an image via iterative operations.
DeepEraser is notably compact with only 1.4M parameters and trained in an end-to-end manner.
arXiv Detail & Related papers (2024-02-29T12:39:04Z) - Efficiently Leveraging Linguistic Priors for Scene Text Spotting [63.22351047545888]
This paper proposes a method that leverages linguistic knowledge from a large text corpus to replace the traditional one-hot encoding used in auto-regressive scene text spotting and recognition models.
We generate text distributions that align well with scene text datasets, removing the need for in-domain fine-tuning.
Experimental results show that our method not only improves recognition accuracy but also enables more accurate localization of words.
arXiv Detail & Related papers (2024-02-27T01:57:09Z) - Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
With fewer text instances, our produced text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z) - Exploring Stroke-Level Modifications for Scene Text Editing [86.33216648792964]
Scene text editing (STE) aims to replace the text in an image with desired content while preserving the background and the style of the original text.
Previous methods that edit the whole image have to learn different translation rules for background and text regions simultaneously.
We propose a novel network by MOdifying Scene Text image at strokE Level (MOSTEL)
arXiv Detail & Related papers (2022-12-05T02:10:59Z) - SpaText: Spatio-Textual Representation for Controllable Image Generation [61.89548017729586]
SpaText is a new method for text-to-image generation using open-vocabulary scene control.
In addition to a global text prompt that describes the entire scene, the user provides a segmentation map.
We show its effectiveness on two state-of-the-art diffusion models: a pixel-based one and a latent-based one.
arXiv Detail & Related papers (2022-11-25T18:59:10Z) - Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition [101.60244147302197]
We introduce contrastive learning and masked image modeling to learn discrimination and generation of text images.
Our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets.
Our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with similar model size.
arXiv Detail & Related papers (2022-07-01T03:50:26Z) - Self-Supervised Text Erasing with Controllable Image Synthesis [33.60862002159276]
We study an unsupervised scenario by proposing a novel Self-supervised Text Erasing framework.
We first design a style-aware image synthesis function to generate synthetic images with diverse styled texts.
To bridge the text style gap between the synthetic and real-world data, a policy network is constructed to control the synthetic mechanisms.
The proposed method has been extensively evaluated on both the PosterErase and the widely-used SCUT-EnsText datasets.
arXiv Detail & Related papers (2022-04-27T07:21:55Z) - Stroke-Based Scene Text Erasing Using Synthetic Data [0.0]
Scene text erasing can replace text regions with reasonable content in natural images.
The lack of a large-scale real-world scene-text removal dataset prevents existing methods from working at full strength.
We enhance and make full use of the synthetic text and consequently train our model only on the dataset generated by the improved synthetic text engine.
This model can erase selected text instances in a scene image when bounding boxes are provided, or work with an existing scene text detector for fully automatic scene text erasing.
arXiv Detail & Related papers (2021-04-23T09:29:41Z) - Scene text removal via cascaded text stroke detection and erasing [19.306751704904705]
Recent learning-based approaches show promising performance improvements on the scene text removal task.
We propose a novel "end-to-end" framework based on accurate text stroke detection.
arXiv Detail & Related papers (2020-11-19T11:05:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.