Spatio-Temporal Handwriting Imitation
- URL: http://arxiv.org/abs/2003.10593v2
- Date: Fri, 16 Apr 2021 17:09:48 GMT
- Title: Spatio-Temporal Handwriting Imitation
- Authors: Martin Mayr, Martin Stumpf, Anguelos Nicolaou, Mathias Seuret, Andreas
Maier, Vincent Christlein
- Abstract summary: Subdividing the process into smaller subtasks makes it possible to imitate someone's handwriting with a high chance of being visually indistinguishable to humans.
We also show that a typical writer identification system can be partially fooled by the generated fake handwriting.
- Score: 11.54523121769666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most people think that their handwriting is unique and cannot be imitated by
machines, especially not using completely new content. Current cursive
handwriting synthesis is visually limited or needs user interaction. We show
that subdividing the process into smaller subtasks makes it possible to imitate
someone's handwriting with a high chance of being visually indistinguishable to
humans. To this end, a given handwritten sample is used as the target style.
This sample is transferred to an online sequence. Then, a method for online
handwriting synthesis is used to produce a new realistic-looking text primed
with the online input sequence. This new text is then rendered and
style-adapted to the input pen. We show the effectiveness of the pipeline by
generating in- and out-of-vocabulary handwritten samples that are validated in
a comprehensive user study. Additionally, we show that a typical writer
identification system can also be partially fooled by the generated fake handwriting.
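The abstract outlines a four-stage pipeline: convert the offline style sample to an online pen trajectory, prime an online synthesis model with that trajectory to write new text, render the generated trajectory, and adapt the rendering to the input pen style. The following is a minimal, hypothetical sketch of how such a pipeline could be wired together; every function is an illustrative stand-in, not the authors' code, and only the data flow (style image -> trajectory -> new trajectory -> rendered, style-adapted image) reflects the abstract.

```python
"""Hypothetical sketch of the four-stage imitation pipeline described in the
abstract. All functions are illustrative stand-ins, not the authors' code."""
import numpy as np

def offline_to_online(style_image: np.ndarray) -> np.ndarray:
    """Stage 1 (assumed): recover a pen trajectory (x, y, pen-up) from the
    scanned style sample, e.g. via skeletonization and stroke ordering."""
    # Placeholder: treat every dark pixel as a trajectory point.
    ys, xs = np.nonzero(style_image < 128)
    pen_up = np.zeros_like(xs)
    return np.stack([xs, ys, pen_up], axis=1).astype(np.float32)

def synthesize_online(primer: np.ndarray, text: str) -> np.ndarray:
    """Stage 2 (assumed): an online handwriting synthesis model, primed with
    the writer's trajectory, generates strokes for the new text."""
    # Placeholder: jitter the primer once per character of the new text.
    rng = np.random.default_rng(0)
    strokes = [primer + rng.normal(scale=0.5, size=primer.shape) for _ in text]
    return np.concatenate(strokes, axis=0)

def render(trajectory: np.ndarray, height: int = 64, width: int = 256) -> np.ndarray:
    """Stage 3 (assumed): rasterize the generated trajectory to an image."""
    canvas = np.full((height, width), 255, dtype=np.uint8)
    xs = np.clip(trajectory[:, 0], 0, width - 1).astype(int)
    ys = np.clip(trajectory[:, 1], 0, height - 1).astype(int)
    canvas[ys, xs] = 0
    return canvas

def adapt_pen_style(rendered: np.ndarray, style_image: np.ndarray) -> np.ndarray:
    """Stage 4 (assumed): image-to-image style adaptation so the rendering
    matches the ink and pen appearance of the original sample."""
    return rendered  # identity placeholder

def imitate(style_image: np.ndarray, new_text: str) -> np.ndarray:
    primer = offline_to_online(style_image)
    trajectory = synthesize_online(primer, new_text)
    return adapt_pen_style(render(trajectory), style_image)

if __name__ == "__main__":
    fake_sample = np.random.randint(0, 256, size=(64, 256), dtype=np.uint8)
    print(imitate(fake_sample, "hello").shape)
```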
Related papers
- InkSight: Offline-to-Online Handwriting Conversion by Learning to Read and Write [7.4539464693425925]
InkSight aims to empower physical note-takers to effortlessly convert their work (offline handwriting) to digital ink (online handwriting).
Our approach combines reading and writing priors, allowing training a model in the absence of large amounts of paired samples.
Our human evaluation reveals that 87% of the samples produced by our model on the challenging HierText dataset are considered as a valid tracing of the input image.
arXiv Detail & Related papers (2024-02-08T16:41:41Z)
- CAPro: Webly Supervised Learning with Cross-Modality Aligned Prototypes [93.71909293023663]
Cross-modality Aligned Prototypes (CAPro) is a unified contrastive learning framework to learn visual representations with correct semantics.
CAPro achieves new state-of-the-art performance and exhibits robustness to open-set recognition.
arXiv Detail & Related papers (2023-10-15T07:20:22Z)
- Sampling and Ranking for Digital Ink Generation on a tight computational budget [69.15275423815461]
We study ways to maximize the quality of the output of a trained digital ink generative model.
We use and compare the effect of multiple sampling and ranking techniques, in the first ablation study of its kind in the digital ink domain.
arXiv Detail & Related papers (2023-06-02T09:55:15Z)
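The Sampling and Ranking entry above studies drawing several candidates from a trained ink generator and ranking them to keep the best one. A rough, hypothetical illustration of that sample-then-rank loop (the generator and ranker below are toy stand-ins, not the paper's models):

```python
"""Illustrative sample-then-rank loop; `generate_candidate` and `score` are
hypothetical stand-ins for a trained ink generator and a ranking model."""
import random

def generate_candidate(text: str, rng: random.Random) -> str:
    # Stand-in for sampling one digital-ink hypothesis from a generative model.
    return "".join(ch if rng.random() > 0.1 else "?" for ch in text)

def score(text: str, candidate: str) -> float:
    # Stand-in for a ranker, e.g. a recognizer's confidence that the ink
    # actually renders the intended text.
    return sum(a == b for a, b in zip(text, candidate)) / max(len(text), 1)

def sample_and_rank(text: str, num_samples: int = 8, seed: int = 0) -> str:
    rng = random.Random(seed)
    candidates = [generate_candidate(text, rng) for _ in range(num_samples)]
    return max(candidates, key=lambda c: score(text, c))

if __name__ == "__main__":
    print(sample_and_rank("handwriting"))
```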
- Disentangling Writer and Character Styles for Handwriting Generation [8.33116145030684]
We present the style-disentangled Transformer (SDT), which employs two complementary contrastive objectives to extract the style commonalities of reference samples.
Our empirical findings reveal that the two learned style representations provide information at different frequency magnitudes.
arXiv Detail & Related papers (2023-03-26T14:32:02Z)
- Character-Aware Models Improve Visual Text Rendering [57.19915686282047]
Current image generation models struggle to reliably produce well-formed visual text.
Character-aware models provide large gains on a novel spelling task.
Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words.
arXiv Detail & Related papers (2022-12-20T18:59:23Z)
- SLOGAN: Handwriting Style Synthesis for Arbitrary-Length and Out-of-Vocabulary Text [35.83345711291558]
We propose a novel method that can synthesize parameterized and controllable handwriting Styles for arbitrary-Length and Out-of-vocabulary text.
We embed the text content by providing an easily obtainable printed style image, so that the diversity of the content can be flexibly achieved.
Our method can synthesize words that are not included in the training vocabulary and with various new styles.
arXiv Detail & Related papers (2022-02-23T12:13:27Z)
- Letter-level Online Writer Identification [86.13203975836556]
We focus on a novel problem, letter-level online writer-id, which requires only a few trajectories of written letters as identification cues.
A main challenge is that a person often writes a letter in different styles from time to time.
We refer to this problem as the variance of online writing styles (Var-O-Styles).
arXiv Detail & Related papers (2021-12-06T07:21:53Z)
- Data Incubation -- Synthesizing Missing Data for Handwriting Recognition [16.62493361545184]
We show how a generative model can be used to build a better recognizer through the control of content and style.
We use the framework to optimize data synthesis and demonstrate significant improvement on handwriting recognition over a model trained on real data only.
arXiv Detail & Related papers (2021-10-13T21:28:18Z)
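The Data Incubation entry above describes controlling the content and style of synthesized samples and mixing them with real data to train a better recognizer. A toy sketch of that real-plus-synthetic training mix; the generator and recognizer here are stand-ins, not the paper's code:

```python
"""Rough sketch of the real-plus-synthetic training mix suggested by the
Data Incubation entry; the generator and recognizer are toy stand-ins."""
import random

def synthesize_word(content: str, style_id: int) -> tuple[str, str]:
    # Stand-in for a generative model with control over content and style;
    # it returns a (fake "image", label) pair.
    return (f"img<{content}|style{style_id}>", content)

def train_recognizer(samples: list[tuple[str, str]]) -> int:
    # Stand-in for training a handwriting recognizer; returns the dataset size.
    return len(samples)

real_data = [("img<real0>", "offer"), ("img<real1>", "letter")]
vocab = ["handwriting", "imitation", "style"]
rng = random.Random(0)

# Incubate extra data for content/style combinations missing from real_data.
synthetic_data = [synthesize_word(rng.choice(vocab), style_id=s)
                  for s in range(3) for _ in range(2)]

model_size = train_recognizer(real_data + synthetic_data)
print(f"trained on {model_size} samples ({len(synthetic_data)} synthetic)")
```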
- SmartPatch: Improving Handwritten Word Imitation with Patch Discriminators [67.54204685189255]
We propose SmartPatch, a new technique increasing the performance of current state-of-the-art methods.
We combine the well-known patch loss with information gathered from the parallel trained handwritten text recognition system.
This leads to a more enhanced local discriminator and results in more realistic and higher-quality generated handwritten words.
arXiv Detail & Related papers (2021-05-21T18:34:21Z)
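The SmartPatch entry above combines a patch-based discriminator loss with information from a parallel-trained handwritten text recognition (HTR) system. The sketch below illustrates one plausible reading of that idea, scoring character-centred patches of a generated word with a local discriminator; the recognizer, discriminator, and loss form are assumptions for illustration only:

```python
"""Hypothetical sketch of a SmartPatch-style local loss: patches are cropped
around character positions suggested by a recognizer and scored by a patch
discriminator. All components are illustrative stand-ins."""
import numpy as np

def htr_char_centers(image: np.ndarray, num_chars: int) -> list[int]:
    # Stand-in for a handwriting recognizer that localizes characters;
    # here centers are simply spread uniformly over the image width.
    width = image.shape[1]
    return [int((i + 0.5) * width / num_chars) for i in range(num_chars)]

def patch_discriminator(patch: np.ndarray) -> float:
    # Stand-in for a learned local discriminator returning a realism score.
    return float(1.0 / (1.0 + np.exp(-(patch.mean() - 0.5))))

def smartpatch_loss(fake_image: np.ndarray, text: str, patch_w: int = 16) -> float:
    """Average 'fake' penalty over character-centred patches of the word image."""
    scores = []
    for cx in htr_char_centers(fake_image, len(text)):
        lo = max(cx - patch_w // 2, 0)
        patch = fake_image[:, lo:lo + patch_w]
        scores.append(patch_discriminator(patch))
    # The generator wants every local patch to look real (score -> 1).
    return float(np.mean([1.0 - s for s in scores]))

fake_word = np.random.rand(32, 128)
print(smartpatch_loss(fake_word, "offer"))
```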
- Enabling Language Models to Fill in the Blanks [81.59381915581892]
We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document.
We train (or fine-tune) off-the-shelf language models on sequences containing the concatenation of artificially-masked text and the text which was masked.
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
arXiv Detail & Related papers (2020-05-11T18:00:03Z)
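The infilling entry above trains language models on sequences that concatenate the artificially masked text with the text that was masked out. A minimal sketch of constructing one such training example; the special tokens used here are illustrative, not the paper's exact vocabulary:

```python
"""Minimal sketch of building an 'infilling by language modeling' training
example: concatenate the artificially masked text with the masked-out span.
Token names ([blank], [answer], [end]) are illustrative assumptions."""
import random

def make_infilling_example(text: str, rng: random.Random) -> str:
    words = text.split()
    # Pick a contiguous span to mask out.
    start = rng.randrange(len(words))
    end = rng.randrange(start, len(words)) + 1
    masked = words[:start] + ["[blank]"] + words[end:]
    answer = words[start:end]
    # LM training sequence: masked text, separator, then the masked-out text.
    return " ".join(masked) + " [answer] " + " ".join(answer) + " [end]"

rng = random.Random(0)
print(make_infilling_example("she ate leftover pasta for lunch", rng))
```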
- GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images [10.183347908690504]
We take a step closer to producing realistic and varied artificially rendered handwritten words.
We propose a novel method that is able to produce credible handwritten word images by conditioning the generative process with both calligraphic style features and textual content.
arXiv Detail & Related papers (2020-03-05T12:37:29Z)
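The GANwriting entry above conditions the generative process on both calligraphic style features and textual content. A toy sketch of that dual conditioning; the encoders and generator are stand-ins, not the authors' architecture:

```python
"""Toy sketch of conditioning a generator on both style and content, in the
spirit of the GANwriting entry; all components are illustrative stand-ins."""
import numpy as np

def encode_style(writer_samples: list[np.ndarray]) -> np.ndarray:
    # Stand-in for a calligraphic style encoder over a few example images.
    return np.mean([img.mean(axis=0) for img in writer_samples], axis=0)

def encode_content(word: str, dim: int = 32) -> np.ndarray:
    # Stand-in for a character-level content embedding.
    vec = np.zeros(dim)
    for i, ch in enumerate(word):
        vec[i % dim] += ord(ch) / 255.0
    return vec

def generate(style: np.ndarray, content: np.ndarray, height: int = 32) -> np.ndarray:
    # Stand-in generator: mixes the two conditioning vectors into an "image".
    cond = np.concatenate([style, content])
    return np.tile(cond, (height, 1))

writer_samples = [np.random.rand(32, 64) for _ in range(5)]
fake_word = generate(encode_style(writer_samples), encode_content("deep"))
print(fake_word.shape)  # (32, 96): style width 64 plus content dim 32
```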
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.