Multiple Heads are Better than One: Few-shot Font Generation with
Multiple Localized Experts
- URL: http://arxiv.org/abs/2104.00887v1
- Date: Fri, 2 Apr 2021 05:20:51 GMT
- Title: Multiple Heads are Better than One: Few-shot Font Generation with
Multiple Localized Experts
- Authors: Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim
- Abstract summary: We propose a novel FFG method, named Multiple Localized Experts Few-shot Font Generation Network (MX-Font).
MX-Font extracts multiple style features that are not explicitly conditioned on component labels but are learned automatically by multiple experts, each representing a different local concept.
In our experiments, MX-Font outperforms previous state-of-the-art FFG methods in Chinese generation and in cross-lingual generation, e.g., Chinese to Korean.
- Score: 17.97183447033118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A few-shot font generation (FFG) method has to satisfy two objectives: the
generated images should preserve the underlying global structure of the target
character and present the diverse local reference style. Existing FFG methods
aim to disentangle content and style either by extracting a universal style
representation or by extracting multiple component-wise style
representations. However, previous methods either fail to capture diverse local
styles or cannot be generalized to a character with unseen components, e.g.,
unseen language systems. To mitigate the issues, we propose a novel FFG method,
named Multiple Localized Experts Few-shot Font Generation Network (MX-Font).
MX-Font extracts multiple style features that are not explicitly conditioned on
component labels but are learned automatically by multiple experts, each
representing a different local concept, e.g., a left-side sub-glyph. Owing to the multiple experts,
MX-Font can capture diverse local concepts and show the generalizability to
unseen languages. During training, we utilize component labels as weak
supervision to guide each expert to be specialized for different local
concepts. We formulate the component assignment problem for each expert as a
graph matching problem, and solve it by the Hungarian algorithm. We also employ the
independence loss and the content-style adversarial loss to enforce
content-style disentanglement. In our experiments, MX-Font outperforms previous
state-of-the-art FFG methods in Chinese generation and in cross-lingual
generation, e.g., Chinese to Korean. Source code is available at
https://github.com/clovaai/mxfont.
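The abstract frames assigning components to experts as a bipartite matching problem solved by the Hungarian algorithm. As an illustrative sketch (the cost matrix below is hypothetical and stands in for the paper's actual weak-supervision scores), the same optimal assignment can be found for a small number of experts by brute-force enumeration:

```python
from itertools import permutations

def assign_components(cost):
    """Solve the component-to-expert assignment problem by brute force.

    cost[i][j] = cost of assigning expert i to component j.
    Returns (assignment, total cost), where assignment[i] is the
    component given to expert i. The Hungarian algorithm finds the
    same optimum in O(n^3); enumeration is fine for a few experts.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

# Hypothetical 3-expert / 3-component cost matrix.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = assign_components(cost)
print(assignment, total)  # assignment[i] is expert i's component
```

For realistic problem sizes, `scipy.optimize.linear_sum_assignment` provides an efficient solver for the same linear assignment objective.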
Related papers
- Few shot font generation via transferring similarity guided global style
and quantization local style [11.817299400850176]
We present a novel font generation approach by aggregating styles from character similarity-guided global features and stylized component-level representations.
Our AFFG method can obtain a complete set of component-level style representations while also controlling the global glyph characteristics.
arXiv Detail & Related papers (2023-09-02T05:05:40Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and
Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
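The codebook lookup at the heart of VQGAN-style quantization can be sketched in a few lines; the toy 2-D codebook below is hypothetical, standing in for VQ-Font's learned font-token codebook:

```python
def quantize(vec, codebook):
    """Replace a feature vector with its nearest codebook entry
    (squared Euclidean distance), as in VQ-VAE/VQGAN quantization."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda k: d2(vec, codebook[k]))
    return idx, codebook[idx]

# Hypothetical 2-D codebook of learned glyph-stroke tokens.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
idx, token = quantize([0.9, 0.2], codebook)
print(idx, token)
```

Snapping synthesized features to their nearest codebook entries is what lets the learned token prior clean up out-of-distribution strokes.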
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vectors of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
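Projecting a content feature into the linear space spanned by basis-font content features can be illustrated as an ordinary least-squares projection (a sketch under that assumption; CF-Font's actual fusion weighting may differ). With two basis vectors, the normal equations reduce to a 2x2 solve:

```python
def project_onto_basis(f, basis):
    """Least-squares projection of content feature f onto the linear
    span of two basis-font content features (normal equations)."""
    b1, b2 = basis
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Normal equations: G w = c, with Gram matrix G and c[i] = <b_i, f>.
    g11, g12, g22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    c1, c2 = dot(b1, f), dot(b2, f)
    det = g11 * g22 - g12 * g12
    w1 = (c1 * g22 - c2 * g12) / det
    w2 = (g11 * c2 - g12 * c1) / det
    # Return the mixing weights and the projected (fused) feature.
    return [w1, w2], [w1 * x + w2 * y for x, y in zip(b1, b2)]
```

For example, with orthonormal basis features `[1,0,0]` and `[0,1,0]`, the feature `[2,3,4]` projects to `[2,3,0]` with weights `[2, 3]`.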
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate an entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Few-shot Font Generation with Weakly Supervised Localized
Representations [17.97183447033118]
We propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only eight reference glyphs) than other state-of-the-art methods.
arXiv Detail & Related papers (2021-12-22T14:26:53Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-shot Font Generation with Localized Style Representations and
Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-arts.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.