Few shot font generation via transferring similarity guided global style and quantization local style
- URL: http://arxiv.org/abs/2309.00827v2
- Date: Thu, 14 Sep 2023 05:33:44 GMT
- Title: Few shot font generation via transferring similarity guided global style and quantization local style
- Authors: Wei Pan, Anna Zhu, Xinyu Zhou, Brian Kenji Iwana, Shilin Li
- Abstract summary: We present a novel font generation approach by aggregating styles from character similarity-guided global features and stylized component-level representations.
Our AFFG method can obtain a complete set of component-level style representations and also control the global glyph characteristics.
- Score: 11.817299400850176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic few-shot font generation (AFFG), aiming at generating new fonts
with only a few glyph references, reduces the labor cost of manually designing
fonts. However, the traditional AFFG paradigm of style-content disentanglement
cannot capture the diverse local details of different fonts. Therefore, many
component-based approaches have been proposed to tackle this problem. The issue with
component-based approaches is that they usually require special pre-defined
glyph components, e.g., strokes and radicals, which is infeasible for AFFG of
different languages. In this paper, we present a novel font generation approach
by aggregating styles from character similarity-guided global features and
stylized component-level representations. We calculate similarity scores between
the target character and the reference samples by measuring the distance along
the corresponding channels of the content features, and assign these scores as
the weights for aggregating the global style features. To better capture the local
styles, a cross-attention-based style transfer module is adopted to transfer
the styles of reference glyphs to the components, where the components are
self-learned discrete latent codes through vector quantization without manual
definition. With these designs, our AFFG method can obtain a complete set of
component-level style representations and also control the global glyph
characteristics. The experimental results demonstrate the effectiveness and
generalization of the proposed method on different linguistic scripts, and also
show its superiority when compared with other state-of-the-art methods. The
source code can be found at https://github.com/awei669/VQ-Font.
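As a rough illustration of the two style pathways described in the abstract, the sketch below shows (1) similarity-weighted aggregation of global style features from reference glyphs and (2) vector-quantized content components attending to reference local styles via cross-attention. All tensor shapes, function names, and the distance/softmax choices are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
# Minimal PyTorch sketch of the two style pathways described in the abstract.
# Shapes, names, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn.functional as F


def similarity_guided_global_style(content_target, content_refs, style_refs):
    """Aggregate a global style vector from reference glyphs.

    content_target: (C,)   content feature of the target character
    content_refs:   (K, C) content features of the K reference glyphs
    style_refs:     (K, D) global style features of the K reference glyphs
    """
    # Distance between the target and each reference content feature,
    # turned into per-reference similarity weights via a softmax.
    dist = torch.cdist(content_target.unsqueeze(0), content_refs).squeeze(0)  # (K,)
    weights = F.softmax(-dist, dim=0)                                         # (K,)
    # Weighted aggregation of the references' global style features.
    return (weights.unsqueeze(1) * style_refs).sum(dim=0)                     # (D,)


def quantize_components(feat, codebook):
    """Nearest-neighbour vector quantization of local content features.

    feat:     (N, D) local feature vectors of the target content glyph
    codebook: (V, D) self-learned discrete component codes
    """
    idx = torch.cdist(feat, codebook).argmin(dim=1)   # (N,) code indices
    return codebook[idx]                              # (N, D) quantized components


def component_style_transfer(components, ref_local, d_model=64):
    """Cross-attention: quantized components query the references' local styles.

    components: (N, D) quantized component codes (queries)
    ref_local:  (M, D) local style features from the references (keys/values)
    """
    attn = torch.softmax(components @ ref_local.T / d_model ** 0.5, dim=-1)  # (N, M)
    return attn @ ref_local                                                  # (N, D)


if __name__ == "__main__":
    K, C, D, N, M, V = 4, 128, 64, 16, 32, 100   # illustrative sizes
    global_style = similarity_guided_global_style(
        torch.randn(C), torch.randn(K, C), torch.randn(K, D))
    local_style = component_style_transfer(
        quantize_components(torch.randn(N, D), torch.randn(V, D)),
        torch.randn(M, D), d_model=D)
    print(global_style.shape, local_style.shape)  # torch.Size([64]) torch.Size([16, 64])
```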
Related papers
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vectors of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
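For intuition, here is a minimal sketch of the content-fusion idea summarized above: the target's content feature is expressed as a weighted combination of basis-font content features. The least-squares-plus-softmax weighting and all shapes are hypothetical choices for illustration, not necessarily CF-Font's exact formulation.

```python
# Illustrative sketch of projecting a content feature onto a linear space
# spanned by basis-font content features (details are assumptions).
import torch


def fuse_content(target_feat, basis_feats):
    """target_feat: (C,), basis_feats: (B, C) content features of B basis fonts."""
    # Solve for mixing weights w minimizing ||basis_feats^T @ w - target_feat||.
    w = torch.linalg.lstsq(basis_feats.T, target_feat.unsqueeze(1)).solution.squeeze(1)  # (B,)
    w = torch.softmax(w, dim=0)          # keep the combination positive and normalized
    return w @ basis_feats               # (C,) fused content feature


if __name__ == "__main__":
    fused = fuse_content(torch.randn(128), torch.randn(10, 128))
    print(fused.shape)  # torch.Size([128])
```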
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
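Below is a hedged sketch of what a cluster-level contrastive style objective could look like, in the spirit of the CCS loss mentioned above: each glyph's style embedding is pulled toward the centroid of its own font cluster and pushed away from other fonts' centroids. The loss form, temperature, and centroid construction are assumptions for illustration and may differ from DS-Font's formulation.

```python
# Illustrative cluster-level contrastive style loss (not DS-Font's exact CCS loss).
import torch
import torch.nn.functional as F


def cluster_contrastive_loss(style_emb, font_ids, temperature=0.1):
    """style_emb: (N, D) style embeddings, font_ids: (N,) font labels."""
    style_emb = F.normalize(style_emb, dim=1)
    labels = font_ids.unique()
    # Cluster centroids: mean embedding per font, L2-normalized.
    centroids = F.normalize(
        torch.stack([style_emb[font_ids == f].mean(dim=0) for f in labels]), dim=1)
    # Each sample should be closest to its own font's centroid.
    logits = style_emb @ centroids.T / temperature                           # (N, F)
    targets = (font_ids.unsqueeze(1) == labels.unsqueeze(0)).float().argmax(dim=1)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    loss = cluster_contrastive_loss(torch.randn(32, 64), torch.randint(0, 4, (32,)))
    print(loss.item())
```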
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Few-shot Font Generation with Weakly Supervised Localized Representations [17.97183447033118]
We propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only eight reference glyphs) than other state-of-the-art methods.
arXiv Detail & Related papers (2021-12-22T14:26:53Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-shot Font Generation with Localized Style Representations and Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.