Few-Shot Font Generation with Deep Metric Learning
- URL: http://arxiv.org/abs/2011.02206v1
- Date: Wed, 4 Nov 2020 10:12:10 GMT
- Title: Few-Shot Font Generation with Deep Metric Learning
- Authors: Haruka Aoki, Koki Tsubota, Hikaru Ikuta, Kiyoharu Aizawa
- Abstract summary: The proposed framework introduces deep metric learning to style encoders.
We performed experiments using black-and-white and shape-distinctive font datasets.
- Score: 33.12829580813688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing fonts for languages with a large number of characters, such as
Japanese and Chinese, is an extremely labor-intensive and time-consuming task.
In this study, we addressed the problem of automatically generating Japanese
typographic fonts from only a few font samples, where the synthesized glyphs
are expected to have coherent characteristics, such as skeletons, contours, and
serifs. Existing methods often fail to generate fine glyph images when the
number of style reference glyphs is extremely limited. Herein, we proposed a
simple but powerful framework for extracting better style features. This
framework introduces deep metric learning to style encoders. We performed
experiments using black-and-white and shape-distinctive font datasets and
demonstrated the effectiveness of the proposed framework.
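The abstract does not specify which metric-learning objective is applied to the style encoder; as an illustrative sketch only (assuming a standard triplet margin loss, with glyphs from the same font as positives and glyphs from other fonts as negatives), the core computation looks like:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss pulling same-font style embeddings together
    and pushing different-font embeddings apart.

    anchor/positive: style embeddings of two glyphs from the same font
    negative:        style embedding of a glyph from a different font
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-font distance
    d_neg = np.linalg.norm(anchor - negative)  # cross-font distance
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor and positive nearly coincide, negative is far,
# so the margin is already satisfied and the loss is zero.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_margin_loss(a, p, n))  # 0.0
```

Training with such a loss encourages the encoder to place all glyphs of one font in a tight cluster, which is what makes style extraction reliable when only a few reference glyphs are available.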
Related papers
- Decoupling Layout from Glyph in Online Chinese Handwriting Generation [6.566541829858544]
We develop a text line layout generator and stylized font synthesizer.
The layout generator performs in-context-like learning based on the text content and the provided style references to generate positions for each glyph autoregressively.
The font synthesizer which consists of a character embedding dictionary, a multi-scale calligraphy style encoder, and a 1D U-Net based diffusion denoiser will generate each font on its position while imitating the calligraphy style extracted from the given style references.
arXiv Detail & Related papers (2024-10-03T08:46:17Z)
- DeepCalliFont: Few-shot Chinese Calligraphy Font Synthesis by Integrating Dual-modality Generative Models [20.76773399161289]

Few-shot font generation, especially for Chinese calligraphy fonts, is a challenging and ongoing problem.
We propose a novel model, DeepCalliFont, for few-shot Chinese calligraphy font synthesis by integrating dual-modality generative models.
arXiv Detail & Related papers (2023-12-16T04:23:12Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
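The summary describes the codebook mechanism only at a high level; a minimal sketch of the vector-quantization step that VQGAN-style models build on (hypothetical array shapes and names, not VQ-Font's actual code) maps each latent vector to its nearest codebook entry:

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry.

    latents:  (N, D) array of encoder outputs, e.g. for glyph patches
    codebook: (K, D) array of learned token prototypes
    returns:  (indices, quantized) where quantized[i] == codebook[indices[i]]
    """
    # Pairwise squared distances between every latent and every codebook entry.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)      # nearest token id for each latent
    return indices, codebook[indices]   # snap each latent onto the codebook

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
latents = np.array([[0.1, -0.1], [0.9, 1.2]])
ids, q = quantize(latents, codebook)
print(ids)  # [0 1]
```

Snapping synthesized latents onto a codebook pre-trained on real glyphs is one plausible way to narrow the domain gap between synthesized and real-world strokes that the summary mentions.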
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- ZiGAN: Fine-grained Chinese Calligraphy Font Generation via a Few-shot Style Transfer Approach [7.318027179922774]
ZiGAN is a powerful end-to-end Chinese calligraphy font generation framework.
It does not require any manual operation or redundant preprocessing to generate fine-grained target-style characters.
Our method has a state-of-the-art generalization ability in few-shot Chinese character style transfer.
arXiv Detail & Related papers (2021-08-08T09:50:20Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities, such as edges and corners, are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-shot Font Generation with Localized Style Representations and Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-arts.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
- Few-shot Compositional Font Generation with Dual Memory [16.967987801167514]
We propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of the compositionality.
In the experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates a significantly better quality of samples with faithful stylization.
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.