DeepWriteSYN: On-Line Handwriting Synthesis via Deep Short-Term
Representations
- URL: http://arxiv.org/abs/2009.06308v2
- Date: Tue, 8 Dec 2020 11:03:11 GMT
- Title: DeepWriteSYN: On-Line Handwriting Synthesis via Deep Short-Term
Representations
- Authors: Ruben Tolosana, Paula Delgado-Santos, Andres Perez-Uribe, Ruben
Vera-Rodriguez, Julian Fierrez, Aythami Morales
- Abstract summary: DeepWriteSYN is a novel on-line handwriting synthesis approach via deep short-term representations.
It can generate realistic handwriting variations of a given handwritten structure corresponding to the natural variation within a given population or a given subject.
- Score: 14.498981800711302
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study proposes DeepWriteSYN, a novel on-line handwriting synthesis
approach via deep short-term representations. It comprises two modules: i) an
optional and interchangeable temporal segmentation, which divides the
handwriting into short-time segments consisting of individual or multiple
concatenated strokes; and ii) the on-line synthesis of those short-time
handwriting segments, which is based on a sequence-to-sequence Variational
Autoencoder (VAE). The main advantages of the proposed approach are that the
synthesis is carried out in short-time segments (that can run from a character
fraction to full characters) and that the VAE can be trained on a configurable
handwriting dataset. These two properties give a lot of flexibility to our
synthesiser, e.g., as shown in our experiments, DeepWriteSYN can generate
realistic handwriting variations of a given handwritten structure corresponding
to the natural variation within a given population or a given subject. These
two cases are developed experimentally for individual digits and handwritten
signatures, respectively, achieving remarkable results in both cases.
Also, we provide experimental results for the task of on-line signature
verification, showing the high potential of DeepWriteSYN to significantly
improve one-shot learning scenarios. To the best of our knowledge, this is the
first synthesis approach capable of generating realistic on-line handwriting in
the short term (including handwritten signatures) via deep learning. This can
be very useful as a module toward long-term realistic handwriting generation,
either completely synthetic or as a natural variation of given handwriting
samples.
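
To make the second module more concrete, below is a minimal sketch of a sequence-to-sequence VAE operating on short-time handwriting segments, in the spirit of the approach described in the abstract. All specifics here are assumptions for illustration, not the authors' exact configuration: the (dx, dy, pen-state) point format, the GRU layer types and sizes, the loss weighting, and the latent-perturbation strategy used to generate variations.

```python
# Minimal sketch of a sequence-to-sequence VAE for short-time handwriting
# segments. Architectural details are assumptions, not DeepWriteSYN's exact
# configuration.
import torch
import torch.nn as nn

class SegmentVAE(nn.Module):
    def __init__(self, feat_dim=3, hidden_dim=256, latent_dim=64):
        super().__init__()
        # Each time step is assumed to be (dx, dy, pen-state).
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(2 * hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def encode(self, x):
        _, h = self.encoder(x)               # h: (2, B, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z, x_shifted):
        # Teacher forcing: the decoder sees the segment shifted by one step.
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        out, _ = self.decoder(x_shifted, h0)
        return self.out(out)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        start = torch.zeros_like(x[:, :1, :])
        x_shifted = torch.cat([start, x[:, :-1, :]], dim=1)
        return self.decode(z, x_shifted), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=0.5):
    rec = nn.functional.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

# Generating a realistic variation of a given segment: encode it, perturb the
# latent code slightly, and decode (teacher forcing kept here for brevity;
# free-running decoding would be used in practice).
model = SegmentVAE()
segment = torch.randn(1, 50, 3)              # one 50-point segment (dummy data)
mu, logvar = model.encode(segment)
z = mu + 0.1 * torch.randn_like(mu)          # small latent perturbation
start = torch.zeros_like(segment[:, :1, :])
variation = model.decode(z, torch.cat([start, segment[:, :-1, :]], dim=1))
```

Under this sketch, many such perturbed decodings of a single genuine segment could serve as synthetic training samples, which is one plausible way a generator of this kind could help one-shot signature verification as reported in the paper.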
Related papers
- Scalable Learning of Latent Language Structure With Logical Offline
Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z) - Towards Writing Style Adaptation in Handwriting Recognition [0.0]
We explore models with writer-dependent parameters which take the writer's identity as an additional input.
We propose a Writer Style Block (WSB), an adaptive instance normalization layer conditioned on learned embeddings of the partitions.
We show that our approach outperforms a baseline with no WSB in a writer-dependent scenario and that it is possible to estimate embeddings for new writers.
arXiv Detail & Related papers (2023-02-13T12:36:17Z) - MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text
Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z) - SLOGAN: Handwriting Style Synthesis for Arbitrary-Length and
Out-of-Vocabulary Text [35.83345711291558]
We propose a novel method that can synthesize parameterized and controllable handwriting styles for arbitrary-length and out-of-vocabulary text.
We embed the text content by providing an easily obtainable printed style image, so that the diversity of the content can be flexibly achieved.
Our method can synthesize words that are not included in the training vocabulary, in various new styles.
arXiv Detail & Related papers (2022-02-23T12:13:27Z) - Letter-level Online Writer Identification [86.13203975836556]
We focus on a novel problem, letter-level online writer-id, which requires only a few trajectories of written letters as identification cues.
A main challenge is that a person often writes a letter in different styles from time to time.
We refer to this problem as the variance of online writing styles (Var-O-Styles).
arXiv Detail & Related papers (2021-12-06T07:21:53Z) - Data Incubation -- Synthesizing Missing Data for Handwriting Recognition [16.62493361545184]
We show how a generative model can be used to build a better recognizer through the control of content and style.
We use the framework to optimize data synthesis and demonstrate significant improvement on handwriting recognition over a model trained on real data only.
arXiv Detail & Related papers (2021-10-13T21:28:18Z) - Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z) - Generating Handwriting via Decoupled Style Descriptors [28.31500214381889]
We introduce the Decoupled Style Descriptor model for handwriting.
It factors both character- and writer-level styles and allows our model to represent an overall greater space of styles.
In experiments, our generated results were preferred over a state-of-the-art baseline method 88% of the time.
arXiv Detail & Related papers (2020-08-26T02:52:48Z) - Text Recognition in Real Scenarios with a Few Labeled Samples [55.07859517380136]
Scene text recognition (STR) is still a hot research topic in the computer vision field.
This paper proposes a few-shot adversarial sequence domain adaptation (FASDA) approach to build sequence adaptation.
Our approach can maximize the character-level confusion between the source domain and the target domain.
arXiv Detail & Related papers (2020-06-22T13:03:01Z) - POINTER: Constrained Progressive Text Generation via Insertion-based
Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z) - Spatio-Temporal Handwriting Imitation [11.54523121769666]
Subdividing the process into smaller subtasks makes it possible to imitate someone's handwriting with a high chance of being visually indistinguishable to humans.
We show that a typical writer identification system can also be partially fooled by the generated fake handwriting.
arXiv Detail & Related papers (2020-03-24T00:46:40Z)