Attribute2Font: Creating Fonts You Want From Attributes
- URL: http://arxiv.org/abs/2005.07865v1
- Date: Sat, 16 May 2020 04:06:53 GMT
- Title: Attribute2Font: Creating Fonts You Want From Attributes
- Authors: Yizhi Wang, Yue Gao, Zhouhui Lian
- Abstract summary: Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values.
A novel unit named Attribute Attention Module is designed to make those generated glyph images better embody the prominent font attributes.
- Score: 32.82714291856353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Font design is still considered an exclusive privilege of professional designers, whose creativity is not possessed by existing software systems. Nevertheless, we notice that most commercial font products are in fact manually designed by following specific requirements on some attributes of glyphs, such as italic, serif, cursive, width, angularity, etc. Inspired by this fact, we propose a novel model, Attribute2Font, to automatically create fonts by synthesizing visually pleasing glyph images according to user-specified attributes and their corresponding values. To the best of our knowledge, our model is the first in the literature capable of generating glyph images in new font styles, instead of retrieving existing fonts, according to given values of specified font attributes. Specifically, Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values. After training, our model can generate glyph images in accordance with an arbitrary set of font attribute values. Furthermore, a novel unit named the Attribute Attention Module is designed to make the generated glyph images better embody the prominent font attributes. Considering that annotations of font attribute values are extremely expensive to obtain, a semi-supervised learning scheme is also introduced to exploit a large number of unlabeled fonts. Experimental results demonstrate that our model achieves impressive performance on many tasks, such as creating glyph images in new font styles, editing existing fonts, and interpolating among different fonts.
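The core conditioning mechanism is easy to illustrate. Below is a minimal, hypothetical PyTorch sketch of attribute-conditioned channel reweighting in the spirit of the Attribute Attention Module; the `AttributeAttention` name, layer sizes, and sigmoid gating are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: gate feature channels by user-specified attribute
# values, so channels tied to the requested attributes are emphasized.
import torch
import torch.nn as nn

class AttributeAttention(nn.Module):
    def __init__(self, num_attributes: int, channels: int):
        super().__init__()
        # Map the attribute vector to one gate per feature channel.
        self.mlp = nn.Sequential(
            nn.Linear(num_attributes, channels),
            nn.ReLU(),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) glyph features; attrs: (B, A) values in [0, 1].
        gate = self.mlp(attrs).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feats * gate

feats = torch.randn(2, 64, 16, 16)
attrs = torch.rand(2, 37)  # e.g. a 37-dimensional font-attribute vector
out = AttributeAttention(37, 64)(feats, attrs)
```

Driving per-channel gates from the attribute vector lets one shared backbone emphasize different stroke statistics for different attribute settings.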
Related papers
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
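VQ-Font's token prior reduces, at its core, to standard vector quantization. A minimal sketch of that codebook lookup (generic VQ-VAE-style quantization, not the authors' code):

```python
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # z: (N, D) encoder outputs; codebook: (K, D) learned glyph-token prior.
    d = torch.cdist(z, codebook)   # (N, K) pairwise distances
    idx = d.argmin(dim=1)          # nearest code per feature vector
    z_q = codebook[idx]            # snap features onto the token prior
    # Straight-through estimator so gradients flow back to the encoder.
    return z + (z_q - z).detach()

z = torch.randn(8, 256, requires_grad=True)
codebook = torch.randn(512, 256)
z_q = quantize(z, codebook)
```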
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vectors of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
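The CFM's projection step can be sketched as re-expressing a character's content feature as a weighted mix of the same character rendered in basis fonts. The softmax-over-distance weighting below is an assumption, not necessarily how CF-Font weights its basis:

```python
import torch
import torch.nn.functional as F

def fuse_content(f: torch.Tensor, basis: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # f: (D,) content feature; basis: (K, D) features of K basis fonts.
    w = F.softmax(-torch.cdist(f[None], basis)[0] / tau, dim=0)  # (K,)
    return w @ basis  # fused feature lies in the span of the basis

f = torch.randn(256)
basis = torch.randn(10, 256)
fused = fuse_content(f, basis)
```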
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gap and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
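Stripped of specifics, training such a model amounts to a conditional DDPM noise-prediction objective. The sketch below is that generic objective with placeholder modules, not Diff-Font's released code:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(unet, x0, cond, alphas_cumprod):
    # x0: (B, 1, H, W) clean glyphs; cond: (B, E) character/style condition.
    b = x0.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=x0.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise  # forward diffusion
    pred = unet(x_t, t, cond)                     # predict the injected noise
    return F.mse_loss(pred, noise)

# Toy usage with a stand-in "unet" that just predicts zeros.
alphas = torch.linspace(0.9999, 0.98, 1000).cumprod(dim=0)
loss = ddpm_loss(lambda x, t, c: torch.zeros_like(x),
                 torch.rand(4, 1, 32, 32), torch.randn(4, 128), alphas)
```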
- Font Representation Learning via Paired-glyph Matching [15.358456947574913]
We propose a novel font representation learning scheme to embed font styles into the latent space.
To learn representations that discriminate one font from others, we propose a paired-glyph matching-based font representation learning model.
We show our font representation learning scheme achieves better generalization performance than the existing font representation learning techniques.
arXiv Detail & Related papers (2022-11-20T12:27:27Z)
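A paired-glyph matching objective can be sketched as a contrastive loss in which two glyphs of the same font are positives; the InfoNCE form here is an assumption:

```python
import torch
import torch.nn.functional as F

def paired_glyph_loss(za: torch.Tensor, zb: torch.Tensor, tau: float = 0.07):
    # za, zb: (B, D) style embeddings of two different glyphs per font;
    # row i of za and row i of zb come from the same font.
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    logits = za @ zb.t() / tau              # (B, B) font-to-font similarities
    target = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, target)  # matched font is the positive

loss = paired_glyph_loss(torch.randn(16, 128), torch.randn(16, 128))
```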
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
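Gathering fine-grained local style amounts to letting content-glyph patches attend to reference-glyph patches; a hedged sketch with assumed shapes and head count:

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
content = torch.randn(2, 64, 256)    # (B, content patches, D)
reference = torch.randn(2, 192, 256) # (B, patches from a few reference glyphs, D)
# Each content patch queries the reference patches, picking up local style
# from spatially corresponding strokes instead of one global style vector.
styled, weights = attn(query=content, key=reference, value=reference)
```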
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
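The dual-latent factorization (one latent per font, one per character type) can be sketched with a toy decoder; dimensions and architecture are placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

class DualLatentDecoder(nn.Module):
    def __init__(self, d_font=64, d_char=64, out=32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_font + d_char, 512), nn.ReLU(), nn.Linear(512, out))

    def forward(self, z_font, z_char):
        # Any (font, character) pairing decodes to a glyph, which is what
        # lets the model scale across large character sets.
        return self.net(torch.cat([z_font, z_char], dim=-1)).view(-1, 1, 32, 32)

glyph = DualLatentDecoder()(torch.randn(4, 64), torch.randn(4, 64))
```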
- Font Completion and Manipulation by Cycling Between Multi-Modality Representations [113.26243126754704]
We explore the generation of font glyphs as 2D graphic objects, using a graph as an intermediate representation.
We formulate a cross-modality cycled image-to-image structure with a graph between an image encoder and an image renderer.
Our model generates better results than both the image-to-image baseline and previous state-of-the-art methods for glyph completion.
arXiv Detail & Related papers (2021-08-30T02:43:29Z)
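The image-to-image cycle through a graph can be sketched as a round-trip consistency loss; plain tensors of node features stand in for the paper's actual glyph graph, and both modules are placeholders:

```python
import torch
import torch.nn.functional as F

def cycle_loss(img, img_to_graph, graph_to_img):
    nodes = img_to_graph(img)     # (B, N, D) node features for the glyph
    recon = graph_to_img(nodes)   # render the graph back into an image
    return F.l1_loss(recon, img)  # image -> graph -> image consistency

img = torch.rand(2, 1, 32, 32)
to_graph = lambda x: x.flatten(1).view(2, 64, 16)  # stand-in encoder
to_img = lambda n: n.view(2, 1, 32, 32)            # stand-in renderer
loss = cycle_loss(img, to_graph, to_img)
```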
- FONTNET: On-Device Font Understanding and Prediction Pipeline [1.5749416770494706]
We propose two engines: a Font Detection Engine and a Font Prediction Engine.
First, we develop a novel CNN architecture for identifying the font style of text in images.
Second, we design a novel algorithm for predicting similar fonts for a given query font.
Third, we optimize and deploy the entire engine on-device, which ensures privacy and improves latency in real-time applications such as instant messaging.
arXiv Detail & Related papers (2021-03-30T08:11:24Z)
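Similar-font prediction can be sketched as nearest-neighbour search in a font embedding space; the embedding source and cosine metric are assumptions, not FONTNET's published algorithm:

```python
import torch
import torch.nn.functional as F

def similar_fonts(query: torch.Tensor, gallery: torch.Tensor, k: int = 5):
    # query: (D,) embedding of the query font; gallery: (N, D) known fonts.
    sims = F.cosine_similarity(query[None], gallery)  # (N,)
    return sims.topk(k).indices                       # ids of the k closest fonts

ids = similar_fonts(torch.randn(128), torch.randn(1000, 128))
```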
- Impressions2Font: Generating Fonts by Specifying Impressions [10.345810093530261]
This paper proposes Impressions2Font (Imp2Font) that generates font images with specific impressions.
Imp2Font accepts an arbitrary number of impression words as the condition to generate the font images.
arXiv Detail & Related papers (2021-03-18T06:10:26Z)
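Accepting an arbitrary number of impression words is straightforward if the word embeddings are pooled into a fixed-size condition vector; the mean pooling below is an assumption, not necessarily Imp2Font's conditioning:

```python
import torch
import torch.nn as nn

vocab, d = 5000, 128
embed = nn.Embedding(vocab, d)

def impression_condition(word_ids: torch.Tensor) -> torch.Tensor:
    # word_ids: (B, W) indices of W impression words (W may vary per batch).
    return embed(word_ids).mean(dim=1)  # (B, d) fixed-size condition vector

cond3 = impression_condition(torch.randint(0, vocab, (2, 3)))  # 3 words
cond5 = impression_condition(torch.randint(0, vocab, (2, 5)))  # or 5 words
```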
- Few-shot Compositional Font Generation with Dual Memory [16.967987801167514]
We propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of the compositionality.
In the experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates samples of significantly better quality with faithful stylization.
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
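The memory components can be sketched as a cache of per-component style features used to reassemble unseen characters; a plain dict stands in for DM-Font's learned dual memory:

```python
import torch

memory = {}  # component id -> style feature

def write(component_id, feat):
    memory[component_id] = feat  # remember this sub-glyph's style

def read(component_ids):
    # Compose an unseen character from stored component styles.
    return torch.stack([memory[c] for c in component_ids])

write(3, torch.randn(64))
write(7, torch.randn(64))
parts = read([3, 7])  # (2, 64) component features for a decoder
```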
- Neural Style Difference Transfer and Its Application to Font Generation [14.567067583556717]
We introduce a method to create fonts automatically: the difference in style between two fonts is found and transferred to another font using neural style transfer.
arXiv Detail & Related papers (2020-01-21T03:32:44Z)
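A style-difference loss in the Gram-matrix tradition of neural style transfer might look like the following; the exact formulation is an assumption, not the paper's:

```python
import torch
import torch.nn.functional as F

def gram(f: torch.Tensor) -> torch.Tensor:
    # f: (C, H, W) features from one layer of a pretrained CNN.
    c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def style_difference_loss(out, base, font_a, font_b):
    # Push the output's style away from its base font by the same offset
    # that separates font B from font A.
    target = gram(base) + (gram(font_b) - gram(font_a))
    return F.mse_loss(gram(out), target)

feats = [torch.randn(64, 16, 16) for _ in range(4)]
loss = style_difference_loss(*feats)
```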