FontNet: Closing the gap to font designer performance in font synthesis
- URL: http://arxiv.org/abs/2205.06512v1
- Date: Fri, 13 May 2022 08:37:10 GMT
- Title: FontNet: Closing the gap to font designer performance in font synthesis
- Authors: Ammar Ul Hassan Muhammad, Jaeyoung Choi
- Abstract summary: We propose a model, called FontNet, that learns to separate font styles in the embedding space where distances directly correspond to a measure of font similarity.
We design the network architecture and training procedure that can be adopted for any language system and can produce high-resolution font images.
- Score: 3.991334489146843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Font synthesis has been a very active topic in recent years because manual
font design requires domain expertise and is a labor-intensive and
time-consuming job. While remarkably successful, existing methods for font
synthesis have major shortcomings: they require fine-tuning on unobserved font
styles with large numbers of reference images, and the recent few-shot font
synthesis methods are either designed for specific language systems or operate
on low-resolution images, which limits their use. In this paper, we tackle this
font synthesis problem by learning the font style in the embedding space. To
this end, we propose a model, called FontNet, that simultaneously learns to
separate font styles in the embedding space where distances directly correspond
to a measure of font similarity, and translates input images into the given
observed or unobserved font style. Additionally, we design a network
architecture and training procedure that can be adopted for any language
system and can produce high-resolution font images. Thanks to this approach,
our proposed method outperforms existing state-of-the-art font generation
methods in both qualitative and quantitative experiments.
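The key mechanism in this abstract is metric learning on a style embedding. As a rough illustration only, the PyTorch sketch below trains a generic CNN style encoder with a standard triplet loss so that glyphs of the same font cluster together; the encoder architecture, embedding size, and margin are illustrative assumptions, not FontNet's actual design.

```python
# Illustrative metric-learning sketch (assumed architecture, not FontNet's):
# a CNN maps glyph images to unit-norm embeddings, and a triplet loss makes
# embedding distance track font similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEncoder(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit norm so distances are comparable

encoder = StyleEncoder()
triplet = nn.TripletMarginLoss(margin=0.2)

# Anchor and positive are glyphs from the same font; negative is another font.
anchor, positive, negative = (torch.randn(8, 1, 64, 64) for _ in range(3))
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```

FontNet couples an objective of this kind with an image-translation generator that renders glyphs in the target style; only the embedding half is sketched here.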
Related papers
- GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models [18.15911470339845]
We introduce a diffusion-based method, termed GRIF-DM, to generate fonts that vividly embody specific impressions.
Our experimental results, conducted on the MyFonts dataset, affirm that this method is capable of producing realistic, vibrant, and high-fidelity fonts.
arXiv Detail & Related papers (2024-08-14T02:26:46Z)
- FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning [45.696909070215476]
FontDiffuser is a diffusion-based image-to-image one-shot font generation method.
It consistently excels on complex characters and large style changes compared to previous methods.
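For readers unfamiliar with the recipe these diffusion-based font generators share, below is a deliberately minimal training-step sketch: a network learns to predict the noise added to a target glyph, conditioned on a content glyph and a one-shot style reference. The single-convolution stand-in for the U-Net and the concatenation-based conditioning are placeholder assumptions, not FontDiffuser's architecture.

```python
# Minimal denoising-diffusion training step for font generation (toy sketch):
# noise a target glyph, then train a network to predict that noise given a
# content glyph and a style reference.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal fraction

class NoisePredictor(nn.Module):
    """Toy stand-in for a U-Net: sees noisy glyph + content + style refs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, 3, padding=1)  # 3 stacked 1-channel images

    def forward(self, x_t, content, style):
        return self.net(torch.cat([x_t, content, style], dim=1))

model = NoisePredictor()
x0 = torch.randn(4, 1, 64, 64)        # target glyphs in the new style
content = torch.randn(4, 1, 64, 64)   # glyph giving the character identity
style = torch.randn(4, 1, 64, 64)     # one-shot style reference

t = torch.randint(0, T, (4,))
a = alpha_bar[t].view(-1, 1, 1, 1)
noise = torch.randn_like(x0)
x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
loss = nn.functional.mse_loss(model(x_t, content, style), noise)
loss.backward()
```

Sampling then runs the learned denoiser backwards from pure noise; FontDiffuser's multi-scale content aggregation and style-contrastive training are additions on top of this backbone.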
arXiv Detail & Related papers (2023-12-19T13:23:20Z)
- DeepCalliFont: Few-shot Chinese Calligraphy Font Synthesis by Integrating Dual-modality Generative Models [20.76773399161289]
Few-shot font generation, especially for Chinese calligraphy fonts, is a challenging and ongoing problem.
We propose a novel model, DeepCalliFont, for few-shot Chinese calligraphy font synthesis by integrating dual-modality generative models.
arXiv Detail & Related papers (2023-12-16T04:23:12Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
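The token prior rests on plain vector quantization. The sketch below shows only that lookup step, assuming an already-trained codebook: each glyph feature is snapped to its nearest codebook entry, which is how quantization can pull synthesized strokes back toward the distribution of real glyph tokens.

```python
# Hypothetical sketch of the vector-quantization lookup behind a token prior:
# each glyph feature vector is replaced by its nearest entry in a learned
# codebook, constraining features to the manifold seen during pre-training.
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """features: (N, D) glyph features; codebook: (K, D) learned entries."""
    dists = torch.cdist(features, codebook)   # (N, K) pairwise distances
    indices = dists.argmin(dim=1)             # nearest code per feature
    return codebook[indices]                  # quantized features, (N, D)

codebook = torch.randn(512, 64)   # K=512 codes, D=64 dims (illustrative sizes)
feats = torch.randn(10, 64)
print(quantize(feats, codebook).shape)  # torch.Size([10, 64])
```

In a full VQGAN the codebook is learned jointly with the encoder and decoder (straight-through gradients plus a commitment loss); none of that training machinery is shown here.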
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vectors of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
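To make the projection idea concrete, the toy sketch below expresses a character's content feature as a convex combination of the same character's features from several basis fonts. The softmax-over-distances weighting is an assumption for illustration; the CFM's actual formulation may differ.

```python
# Hedged sketch of content fusion: project a content feature into the span
# of basis-font content features, weighting closer basis fonts more heavily.
import torch

def fuse_content(feat: torch.Tensor, basis_feats: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """feat: (D,) content feature; basis_feats: (B, D) basis-font features."""
    dists = (basis_feats - feat).pow(2).sum(dim=1)        # (B,) squared dists
    weights = torch.softmax(-dists / temperature, dim=0)  # closer => heavier
    return weights @ basis_feats                          # fused feature, (D,)

basis = torch.randn(10, 256)   # B=10 basis fonts, D=256 (illustrative sizes)
target = torch.randn(256)
print(fuse_content(target, basis).shape)  # torch.Size([256])
```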
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Learning Perceptual Manifold of Fonts [7.395615703126767]
We propose the perceptual manifold of fonts to visualize the perceptual adjustment in the latent space of a generative model of fonts.
In our study, the proposed font-exploration user interface proved more efficient and helpful than a conventional user interface for finding fonts that match a designated user preference.
arXiv Detail & Related papers (2021-06-17T01:22:52Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing such features.
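A minimal sketch of that representation, with all details assumed: a glyph is a set of small MLPs, each emitting a signed distance, and combining them with a per-point minimum is permutation-invariant while letting sharp corners arise where the pieces intersect.

```python
# Illustrative sketch (not the paper's code): a glyph as a set of implicit
# functions. Each MLP maps a 2D point to a signed distance; taking the
# per-point minimum over the set is order-independent and acts as a union.
import torch
import torch.nn as nn

def make_implicit() -> nn.Module:
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))  # signed distance at (x, y)

implicits = nn.ModuleList(make_implicit() for _ in range(4))  # 4 local pieces

def glyph_sdf(points: torch.Tensor) -> torch.Tensor:
    """points: (N, 2) coordinates -> (N,) signed distances to the glyph."""
    sdfs = torch.stack([f(points).squeeze(-1) for f in implicits])  # (4, N)
    return sdfs.min(dim=0).values  # min over the set: permutation-invariant

pts = torch.rand(1000, 2) * 2 - 1   # sample points in [-1, 1]^2
inside = glyph_sdf(pts) < 0          # negative distance = inside the glyph
print(inside.sum().item(), "points fall inside the glyph")
```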
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Few-Shot Font Generation with Deep Metric Learning [33.12829580813688]
The proposed framework introduces deep metric learning to style encoders.
We performed experiments using black-and-white and shape-distinctive font datasets.
arXiv Detail & Related papers (2020-11-04T10:12:10Z)
- Few-shot Compositional Font Generation with Dual Memory [16.967987801167514]
We propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of compositionality.
In experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates samples of significantly better quality with faithful stylization.
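As a loose illustration of the memory idea (the actual DM-Font design differs in detail and uses dual memories), the sketch below keys a feature store by sub-glyph component: a few reference glyphs populate the memory, and features for unseen characters are assembled by looking up their components, exploiting the compositionality of scripts such as Korean.

```python
# Rough sketch (assumed design, not DM-Font's code) of a component-keyed
# memory: store one feature per sub-glyph component from the references,
# then compose features for unseen characters by component lookup.
import torch

class ComponentMemory:
    def __init__(self):
        self.slots: dict[int, torch.Tensor] = {}

    def write(self, component_id: int, feature: torch.Tensor) -> None:
        self.slots[component_id] = feature   # one feature per component

    def read(self, component_ids: list[int]) -> torch.Tensor:
        # Stack the stored features for the character's components.
        return torch.stack([self.slots[c] for c in component_ids])

memory = ComponentMemory()
for cid in (3, 7, 21):                 # components seen in reference glyphs
    memory.write(cid, torch.randn(128))
feats = memory.read([3, 21])           # compose an unseen character
print(feats.shape)                      # torch.Size([2, 128])
```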
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.