Impressions2Font: Generating Fonts by Specifying Impressions
- URL: http://arxiv.org/abs/2103.10036v1
- Date: Thu, 18 Mar 2021 06:10:26 GMT
- Title: Impressions2Font: Generating Fonts by Specifying Impressions
- Authors: Seiya Matsuda, Akisato Kimura, Seiichi Uchida
- Abstract summary: This paper proposes Impressions2Font (Imp2Font) that generates font images with specific impressions.
Imp2Font accepts an arbitrary number of impression words as the condition to generate the font images.
- Score: 10.345810093530261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various fonts give us various impressions, which are often represented by
words. This paper proposes Impressions2Font (Imp2Font) that generates font
images with specific impressions. Imp2Font is an extended version of
conditional generative adversarial networks (GANs). More precisely, Imp2Font
accepts an arbitrary number of impression words as the condition to generate
the font images. These impression words are converted into a soft-constraint
vector by an impression embedding module built on a word embedding technique.
Qualitative and quantitative evaluations prove that Imp2Font generates font
images with higher quality than comparative methods by providing multiple
impression words or even unlearned words.
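To make the conditioning pipeline concrete, here is a minimal sketch in the spirit of the abstract: an arbitrary number of impression words is mapped through a word-embedding table, pooled into one condition vector, and concatenated with noise for a GAN generator. All module names, layer sizes, and the simple mean pooling are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of impression-word conditioning for a GAN generator.
import torch
import torch.nn as nn

class ImpressionEmbedding(nn.Module):
    """Turns a variable number of impression words into one condition vector."""
    def __init__(self, vocab_size: int, dim: int = 300):
        super().__init__()
        # Stand-in for a pretrained word-embedding table (e.g., word2vec).
        self.word_emb = nn.Embedding(vocab_size, dim)

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        # word_ids: (batch, n_words); mean pooling keeps the module
        # agnostic to how many impression words are given.
        return self.word_emb(word_ids).mean(dim=1)

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100, cond_dim: int = 300, img_pixels: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, cond], dim=1))

emb = ImpressionEmbedding(vocab_size=10_000)
gen = Generator()
words = torch.randint(0, 10_000, (4, 3))     # e.g., ["elegant", "serif", "thin"]
fake = gen(torch.randn(4, 100), emb(words))  # (4, 4096) flattened glyph images
```

Mean pooling is one simple way to stay agnostic to the number of impression words; the paper's impression embedding module produces a soft-constraint vector for the same purpose.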
Related papers
- GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models [18.15911470339845]
We introduce a diffusion-based method, termed GRIF-DM, to generate fonts that vividly embody specific impressions.
Our experimental results, conducted on the MyFonts dataset, affirm that this method is capable of producing realistic, vibrant, and high-fidelity fonts.
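As a rough illustration of impression-conditioned diffusion (not GRIF-DM's actual architecture), the sketch below trains a toy denoiser on flattened glyphs with a generic text-embedding condition; the network, timestep encoding, and noise schedule are all placeholder assumptions.

```python
# Hedged sketch of a conditional denoising-diffusion training step for glyphs.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    def __init__(self, img_dim: int = 4096, cond_dim: int = 300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + cond_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, img_dim),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.float().unsqueeze(1) / 1000.0            # crude timestep encoding
        return self.net(torch.cat([x_t, cond, t_feat], 1))  # predict the noise

model = CondDenoiser()
x0 = torch.randn(8, 4096)            # flattened glyph images
cond = torch.randn(8, 300)           # impression-text embedding (source assumed)
t = torch.randint(0, 1000, (8,))
alpha_bar = torch.cos(t.float() / 1000 * 1.57).pow(2).unsqueeze(1)  # toy schedule
noise = torch.randn_like(x0)
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
loss = ((model(x_t, t, cond) - noise) ** 2).mean()  # standard eps-prediction loss
```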
arXiv Detail & Related papers (2024-08-14T02:26:46Z) - Impression-CLIP: Contrastive Shape-Impression Embedding for Fonts [7.542892664684078]
We propose Impression-CLIP, a novel machine-learning model based on CLIP (Contrastive Language-Image Pre-training).
In our experiment, we perform cross-modal retrieval between fonts and impressions through co-embedding.
The results indicate that Impression-CLIP achieves better retrieval accuracy than the state-of-the-art method.
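The co-embedding can be illustrated with the standard CLIP-style symmetric contrastive loss sketched below; the encoders producing the two embeddings, the dimensions, and the temperature are assumptions.

```python
# CLIP-style symmetric contrastive loss between font-shape and impression features.
import torch
import torch.nn.functional as F

def clip_loss(font_emb: torch.Tensor, imp_emb: torch.Tensor, temp: float = 0.07):
    # L2-normalize both modalities, then score all pairs in the batch.
    f = F.normalize(font_emb, dim=1)
    i = F.normalize(imp_emb, dim=1)
    logits = f @ i.t() / temp            # (batch, batch) similarity matrix
    targets = torch.arange(f.size(0))    # matching pairs sit on the diagonal
    # Symmetric cross-entropy: fonts->impressions and impressions->fonts.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = clip_loss(torch.randn(32, 512), torch.randn(32, 512))
```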
arXiv Detail & Related papers (2024-02-26T07:07:18Z) - VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and
Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate the font-token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
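The refinement step rests on vector quantization: snapping each synthesized glyph feature to its nearest codebook entry. A minimal sketch of that operation, with illustrative sizes, follows.

```python
# Vector quantization against a learned codebook (sizes are illustrative).
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # features: (n, d); codebook: (k, d). Find the nearest code by L2 distance.
    dists = torch.cdist(features, codebook)  # (n, k) pairwise distances
    idx = dists.argmin(dim=1)                # index of the closest code
    return codebook[idx]                     # quantized features, (n, d)

codebook = torch.randn(512, 64)              # k=512 learned font-token codes
synth = torch.randn(10, 64)                  # synthesized glyph features
refined = quantize(synth, codebook)          # pulled onto the real-glyph prior
```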
arXiv Detail & Related papers (2023-08-27T06:32:20Z) - Combining OCR Models for Reading Early Modern Printed Books [2.839401411131008]
We study the use of fine-grained font recognition for OCR of books printed from the 15th to the 18th century.
We show that OCR performance is strongly impacted by font style and that selecting fine-tuned models with font group recognition has a very positive impact on the results.
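The selection strategy can be pictured as a simple dispatch: classify the font group first, then route the page to an OCR model fine-tuned for that group. Everything below (the classifier and model names) is hypothetical.

```python
# Hypothetical routing of a page image to a font-group-specific OCR model.
def ocr_with_font_routing(page_image, font_classifier, ocr_models: dict,
                          default: str = "fraktur") -> str:
    group = font_classifier(page_image)  # e.g., "antiqua", "fraktur", ...
    model = ocr_models.get(group, ocr_models[default])
    return model(page_image)

# Toy stand-ins for fine-tuned OCR models and a font-group classifier.
ocr_models = {"antiqua": lambda img: "text-a", "fraktur": lambda img: "text-f"}
print(ocr_with_font_routing(object(), lambda img: "antiqua", ocr_models))
```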
arXiv Detail & Related papers (2023-05-11T20:43:50Z) - CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
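One way to read the projection step: solve for weights that best express a character's content feature in the span of the basis fonts' content features. The least-squares formulation below is an assumption; the paper's exact weighting scheme may differ.

```python
# Projecting a content feature onto the span of basis-font content features.
import torch

def fuse_content(content: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # content: (d,); basis: (m, d) content features of m basis fonts.
    # Solve min_w ||basis^T w - content||^2, then project onto the basis span.
    w = torch.linalg.lstsq(basis.t(), content.unsqueeze(1)).solution  # (m, 1)
    return (basis.t() @ w).squeeze(1)        # fused feature, (d,)

basis = torch.randn(10, 256)                 # 10 basis fonts (sizes illustrative)
fused = fuse_content(torch.randn(256), basis)
```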
arXiv Detail & Related papers (2023-03-24T14:18:40Z) - Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gap and font variation, but also achieves promising performance on difficult character generation.
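A hedged sketch of the one-shot setup: encode the single reference glyph into a style feature, pair it with a per-character content embedding, and let both condition a denoiser. The dimensions, character count, and flattened-image denoiser are illustrative, not Diff-Font's architecture.

```python
# One-shot conditioning: one reference glyph supplies the style for a whole batch.
import torch
import torch.nn as nn

style_enc = nn.Sequential(nn.Linear(4096, 128))  # encodes the one reference glyph
char_emb = nn.Embedding(6763, 128)               # content code per character (count assumed)
denoiser = nn.Sequential(nn.Linear(4096 + 256, 512), nn.ReLU(), nn.Linear(512, 4096))

ref = torch.randn(1, 4096)                       # the single reference sample
style = style_enc(ref).expand(8, -1)             # reuse one style across the batch
content = char_emb(torch.randint(0, 6763, (8,)))
x_t = torch.randn(8, 4096)                       # noisy glyphs at some diffusion step
eps = denoiser(torch.cat([x_t, style, content], dim=1))  # predicted noise
```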
arXiv Detail & Related papers (2022-12-12T13:51:50Z) - Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
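The scalability claim suggests a factorized representation; a sketch under that assumption pairs one latent per font with one latent per character type and decodes the pair with a shared network. Sizes and the decoder are placeholders.

```python
# Dual-latent factorization: font (style) code + character code -> glyph.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(64 + 64, 256), nn.ReLU(), nn.Linear(256, 4096))
font_latents = nn.Embedding(100, 64)       # one style code per font
char_latents = nn.Embedding(52, 64)        # one code per character type
f = font_latents(torch.tensor([3]))        # font #3
c = char_latents(torch.tensor([7]))        # character #7
glyph = decoder(torch.cat([f, c], dim=1))  # reconstructed 64x64 glyph (flattened)
```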
arXiv Detail & Related papers (2021-09-10T20:37:43Z) - Font Completion and Manipulation by Cycling Between Multi-Modality
Representations [113.26243126754704]
We explore the generation of font glyphs as 2D graphic objects, using a graph as an intermediate representation.
We formulate a cross-modality cycled image-to-image structure with a graph constructor between an image encoder and an image renderer.
Our model generates better results than both the image-to-image baseline and previous state-of-the-art methods for glyph completion.
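A toy version of the cycled structure (all modules are stand-ins for the paper's components): encode a glyph image to a small graph-like latent, render it back, and penalize the reconstruction.

```python
# Image -> graph -> image cycle with a consistency loss (stand-in modules).
import torch
import torch.nn as nn

img_to_graph = nn.Linear(4096, 20 * 2)   # 20 "nodes" with (x, y) coordinates
graph_to_img = nn.Linear(20 * 2, 4096)   # renderer stand-in

img = torch.randn(16, 4096)
graph = img_to_graph(img)                # intermediate graph representation
recon = graph_to_img(graph)
cycle_loss = (recon - img).abs().mean()  # image -> graph -> image consistency
```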
arXiv Detail & Related papers (2021-08-30T02:43:29Z) - A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
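Any symmetric reduction over the set keeps the representation permutation-invariant; the sketch below combines three small implicit networks with a median, which is one simple choice rather than a reproduction of the paper's scheme.

```python
# A glyph as a permutation-invariant set of implicit functions over (x, y).
import torch
import torch.nn as nn

implicits = nn.ModuleList(
    [nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1)) for _ in range(3)]
)

def glyph_field(xy: torch.Tensor) -> torch.Tensor:
    # xy: (n, 2) query points -> (n,) signed values; the median over the set
    # is invariant to the order of the implicit functions.
    vals = torch.stack([f(xy).squeeze(1) for f in implicits], dim=1)  # (n, 3)
    return vals.median(dim=1).values

inside = glyph_field(torch.rand(1000, 2)) < 0  # points inside the glyph
```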
arXiv Detail & Related papers (2021-06-12T21:40:11Z) - FONTNET: On-Device Font Understanding and Prediction Pipeline [1.5749416770494706]
We propose two engines: a Font Detection Engine and a Font Prediction Engine.
First, we develop a novel CNN architecture for identifying the font style of text in images.
Second, we design a novel algorithm for predicting similar fonts for a given query font.
Third, we optimize and deploy the entire pipeline on-device, which ensures privacy and improves latency in real-time applications such as instant messaging.
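Similar-font prediction can be illustrated as nearest-neighbor search in a font-embedding space; the embedding source and dimensionality below are assumptions, not FONTNET's algorithm.

```python
# Rank fonts by cosine similarity to a query font's embedding.
import torch
import torch.nn.functional as F

def similar_fonts(query: torch.Tensor, gallery: torch.Tensor, k: int = 5):
    sims = F.normalize(gallery, dim=1) @ F.normalize(query, dim=0)  # (n,)
    return sims.topk(k).indices          # indices of the k nearest fonts

gallery = torch.randn(1000, 128)         # embeddings of 1000 fonts (assumed)
neighbors = similar_fonts(torch.randn(128), gallery)
```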
arXiv Detail & Related papers (2021-03-30T08:11:24Z) - Attribute2Font: Creating Fonts You Want From Attributes [32.82714291856353]
Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values.
A novel unit named Attribute Attention Module is designed to make those generated glyph images better embody the prominent font attributes.
arXiv Detail & Related papers (2020-05-16T04:06:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.