SVG Vector Font Generation for Chinese Characters with Transformer
- URL: http://arxiv.org/abs/2206.10329v1
- Date: Tue, 21 Jun 2022 12:51:19 GMT
- Title: SVG Vector Font Generation for Chinese Characters with Transformer
- Authors: Haruka Aoki, Kiyoharu Aizawa
- Abstract summary: We propose a novel network architecture with Transformer and loss functions to capture structural features without differentiable rendering.
Although the dataset range was still limited to the sans-serif family, we successfully generated Chinese vector fonts for the first time.
- Score: 42.46279506573065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing fonts for Chinese characters is highly labor-intensive and
time-consuming. Although recent methods successfully generate vector fonts for
the English alphabet, Chinese vector font generation has remained an unsolved
problem, despite the high demand for automatic font generation, owing to the
complex shapes and large number of characters. This study addressed the problem
of automatically generating Chinese vector fonts from only a single style and
content reference. We proposed a novel network architecture with Transformer and
loss functions to capture structural features without differentiable rendering.
Although the dataset range was still limited to the sans-serif family, we
successfully generated Chinese vector fonts for the first time using the
proposed method.
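As a rough illustration of the representation such methods target (this sketch is not the authors' code; the `glyph_to_path` helper and the toy contour are invented for illustration), a vector glyph can be serialized as a sequence of SVG path commands, which is the kind of token stream an autoregressive Transformer would be trained to generate:

```python
# Illustrative only: serialize a vector glyph, given as (command, coordinates)
# pairs, into an SVG path "d" string. A generative model would emit such a
# command sequence token by token, conditioned on style and content references.

def glyph_to_path(commands):
    """Turn (op, coords) pairs into an SVG path 'd' attribute string."""
    parts = []
    for op, coords in commands:
        # "Z" (close path) takes no coordinates, so strip the trailing space.
        parts.append((op + " " + " ".join(f"{c:g}" for c in coords)).strip())
    return " ".join(parts)

# A toy closed contour: move-to, two cubic Bezier curves, close.
glyph = [
    ("M", (10, 10)),
    ("C", (20, 0, 40, 0, 50, 10)),
    ("C", (40, 30, 20, 30, 10, 10)),
    ("Z", ()),
]

path = glyph_to_path(glyph)
# Embed the path in a minimal SVG document.
svg = f'<svg xmlns="http://www.w3.org/2000/svg"><path d="{path}"/></svg>'
```

Working directly on such command sequences is what lets a method avoid differentiable rendering: the loss can be defined on the commands themselves rather than on rasterized images.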
Related papers
- Skeleton and Font Generation Network for Zero-shot Chinese Character Generation [53.08596064763731]
We propose a novel Skeleton and Font Generation Network (SFGN) to achieve a more robust Chinese character font generation.
We conduct experiments on misspelled characters, a substantial portion of which differ only slightly from common ones.
Our approach visually demonstrates the efficacy of generated images and outperforms current state-of-the-art font generation methods.
arXiv Detail & Related papers (2025-01-14T12:15:49Z)
- Efficient and Scalable Chinese Vector Font Generation via Component Composition [13.499566877003408]
We introduce the first efficient and scalable Chinese vector font generation approach via component composition.
We propose a framework based on spatial transformer networks (STN) and multiple losses tailored to font characteristics.
Our experiments have demonstrated that our method significantly surpasses the state-of-the-art vector font generation methods.
arXiv Detail & Related papers (2024-04-10T06:39:18Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- VecFontSDF: Learning to Reconstruct and Synthesize High-quality Vector Fonts via Signed Distance Functions [15.47282857047361]
This paper proposes an end-to-end trainable method, VecFontSDF, to reconstruct and synthesize high-quality vector fonts.
Based on the proposed SDF-based implicit shape representation, VecFontSDF learns to model each glyph as shape primitives enclosed by several parabolic curves.
arXiv Detail & Related papers (2023-03-22T16:14:39Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-shot Compositional Font Generation with Dual Memory [16.967987801167514]
We propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of the compositionality.
In the experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates a significantly better quality of samples with faithful stylization.
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.