Few-shot Font Generation with Weakly Supervised Localized
Representations
- URL: http://arxiv.org/abs/2112.11895v1
- Date: Wed, 22 Dec 2021 14:26:53 GMT
- Title: Few-shot Font Generation with Weakly Supervised Localized
Representations
- Authors: Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
- Abstract summary: We propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only eight reference glyphs) than other state-of-the-art methods.
- Score: 17.97183447033118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic few-shot font generation aims to solve a well-defined, real-world
problem because manual font designs are expensive and sensitive to the
expertise of designers. Existing methods learn to disentangle style and content
elements by developing a universal style representation for each font style.
However, this approach limits the model in representing diverse local styles,
because it is unsuitable for complicated letter systems, for example, Chinese,
whose characters consist of a varying number of components (often called
"radicals") with highly complex structures. In this paper, we propose a
novel font generation method that learns localized styles, namely
component-wise style representations, instead of universal styles. The proposed
style representations enable the synthesis of complex local details in text
designs. However, learning component-wise styles solely from a few reference
glyphs is infeasible when a target script has a large number of components, for
example, over 200 for Chinese. To reduce the number of required reference
glyphs, we represent component-wise styles by a product of component and style
factors, inspired by low-rank matrix factorization. Owing to the combination of
strong representation and a compact factorization strategy, our method shows
remarkably better few-shot font generation results (with only eight reference
glyphs) than other state-of-the-art methods. Moreover, our method does not
rely on strong locality supervision, for example, the location of each
component, skeletons, or strokes. The source code is available at https://github.com/clovaai/lffont
and https://github.com/clovaai/fewshot-font-generation.
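The abstract's factorization idea can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sizes (200 components, 50 fonts, rank 8, 64-dimensional embeddings) and the `localized_style` helper are hypothetical; only the idea of expressing a component-wise style as a product of a component factor and a style factor, in the spirit of low-rank matrix factorization, comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 components (e.g., Chinese radicals), 50 fonts,
# a rank-8 factorization, and 64-dimensional style embeddings.
n_components, n_styles, rank, dim = 200, 50, 8, 64

# Instead of learning a full (n_components x n_styles x dim) table of
# localized styles, learn one factor per component and one per font style.
component_factor = rng.standard_normal((n_components, rank, dim))
style_factor = rng.standard_normal((n_styles, rank, dim))

def localized_style(component_id: int, style_id: int) -> np.ndarray:
    """Component-wise style embedding reconstructed as an element-wise
    product of the two factors, summed over the shared rank dimension."""
    return (component_factor[component_id] * style_factor[style_id]).sum(axis=0)

emb = localized_style(3, 7)
print(emb.shape)  # (64,)
```

With these sizes, the two factors hold (200 + 50) x 8 x 64 = 128,000 parameters, versus 200 x 50 x 64 = 640,000 for a full component-by-style table; shrinking the per-style parameters this way is what lets a few reference glyphs suffice to fit a new font's style factor.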
Related papers
- FontDiffuser: One-Shot Font Generation via Denoising Diffusion with
Multi-Scale Content Aggregation and Style Contrastive Learning [45.696909070215476]
FontDiffuser is a diffusion-based image-to-image one-shot font generation method.
It consistently excels on complex characters and large style changes compared to previous methods.
arXiv Detail & Related papers (2023-12-19T13:23:20Z)
- Few shot font generation via transferring similarity guided global style
and quantization local style [11.817299400850176]
We present a novel font generation approach by aggregating styles from character similarity-guided global features and stylized component-level representations.
Our AFFG method obtains a complete set of component-level style representations while also controlling the global glyph characteristics.
arXiv Detail & Related papers (2023-09-02T05:05:40Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and
Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vectors of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font
Generation [13.569449355929574]
We propose a self-supervised cross-modality pre-training strategy and a cross-modality transformer-based encoder.
The encoder is conditioned jointly on the glyph image and the corresponding stroke labels.
It requires only one reference glyph and achieves the lowest rate of bad cases in the few-shot font generation task, 28% lower than the second best.
arXiv Detail & Related papers (2022-04-11T13:34:40Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-Shot Font Generation with Deep Metric Learning [33.12829580813688]
The proposed framework introduces deep metric learning to style encoders.
We performed experiments using black-and-white and shape-distinctive font datasets.
arXiv Detail & Related papers (2020-11-04T10:12:10Z)
- Few-shot Font Generation with Localized Style Representations and
Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.