Local Style Awareness of Font Images
- URL: http://arxiv.org/abs/2310.06337v1
- Date: Tue, 10 Oct 2023 06:13:09 GMT
- Title: Local Style Awareness of Font Images
- Authors: Daichi Haraguchi, Seiichi Uchida
- Abstract summary: When we compare fonts, we often pay attention to styles of local parts, such as serifs and curvatures.
This paper proposes an attention mechanism to find important local parts.
The local parts with larger attention are then considered important.
- Score: 8.91092846430013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When we compare fonts, we often pay attention to styles of local parts, such
as serifs and curvatures. This paper proposes an attention mechanism to find
important local parts. The local parts with larger attention are then
considered important. The proposed mechanism can be trained in a
quasi-self-supervised manner that requires no manual annotation other than
knowing that a set of character images is from the same font, such as
Helvetica. After confirming that the trained attention mechanism can find
style-relevant local parts, we utilize the resulting attention for local
style-aware font generation. Specifically, we design a new reconstruction loss
function to put more weight on the local parts with larger attention for
generating character images with more accurate style realization. This loss
function has the merit of applicability to various font generation models. Our
experimental results show that the proposed loss function improves the quality
of character images generated by several few-shot font generation models.
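The proposed loss is compact enough to sketch directly. Below is a minimal PyTorch illustration under stated assumptions: the attention map is taken to come from the paper's pretrained local-style attention mechanism, and the mean-normalization of the weights is an illustrative choice rather than the authors' exact formulation.

```python
import torch

def local_style_weighted_l1(generated, target, attention, eps=1e-8):
    """L1 reconstruction loss re-weighted by a local-attention map.

    generated, target: (B, 1, H, W) glyph images.
    attention:         (B, 1, H, W) non-negative importance map, assumed to
                       come from the trained attention mechanism.
    """
    # Normalize so the loss keeps roughly the scale of a plain L1 loss,
    # while local parts with larger attention contribute more.
    weights = attention / (attention.mean(dim=(2, 3), keepdim=True) + eps)
    return (weights * (generated - target).abs()).mean()
```

Because the weighting only modifies the reconstruction term, it can be added to the objective of most few-shot font generation models without other changes, which is the applicability the abstract claims.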
Related papers
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
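The token prior in VQ-Font rests on a standard VQGAN-style codebook lookup. The sketch below is a generic nearest-neighbor quantizer with a straight-through gradient, not VQ-Font's released code; the codebook size and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GlyphTokenQuantizer(nn.Module):
    """VQGAN-style codebook: snap encoder tokens to their nearest code."""

    def __init__(self, num_tokens=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_tokens, dim)

    def forward(self, z):
        # z: (B, N, dim) glyph feature tokens from the encoder.
        book = self.codebook.weight.expand(z.size(0), -1, -1)
        idx = torch.cdist(z, book).argmin(dim=-1)  # nearest code per token
        z_q = self.codebook(idx)
        # Straight-through estimator: gradients pass to the encoder while the
        # forward pass uses the quantized tokens. Refining synthesized glyphs
        # with these codes is what narrows the gap to real-world strokes.
        return z + (z_q - z).detach(), idx
```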
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
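CF-Font's content fusion amounts to replacing a single content feature with a weighted combination of the basis fonts' content features. The sketch below uses softmax-over-similarity weights as one plausible choice; the paper's actual weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def fuse_content(content_feat, basis_feats, temperature=0.1):
    """Project a content feature into the linear space of basis fonts.

    content_feat: (B, D)    content feature of the input character.
    basis_feats:  (B, K, D) features of the same character in K basis fonts.
    """
    sim = torch.einsum('bd,bkd->bk', content_feat, basis_feats)
    weights = F.softmax(sim / temperature, dim=-1)           # fusion weights
    return torch.einsum('bk,bkd->bd', weights, basis_feats)  # convex combination
```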
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach that learns the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
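A cluster-level contrastive style loss can be read as InfoNCE against per-style cluster centers: each style code is pulled toward its own style's center and pushed from the others. This is one plausible rendering of the CCS loss, not the authors' exact definition; the temperature and the source of the centers are assumptions.

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_style_loss(style_emb, centers, labels, tau=0.07):
    """InfoNCE over style-cluster centers.

    style_emb: (B, D) style codes from the multi-layer style projector.
    centers:   (C, D) one center per font style (e.g. a running mean).
    labels:    (B,)   style index of each sample.
    """
    z = F.normalize(style_emb, dim=-1)
    c = F.normalize(centers, dim=-1)
    logits = z @ c.t() / tau  # (B, C) similarity to every style center
    return F.cross_entropy(logits, labels)
```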
- DGFont++: Robust Deformable Generative Networks for Unsupervised Font Generation [19.473023811252116]
We propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++).
To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently.
Experiments demonstrate that our model is able to generate character images of higher quality than state-of-the-art methods.
arXiv Detail & Related papers (2022-12-30T14:35:10Z)
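A multi-task discriminator of the kind described above typically exposes one real/fake head per font style and evaluates only the head matching a sample's style label, so each style is judged independently. The sketch below follows that common pattern; the backbone and sizes are illustrative, not DGFont++'s architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    """One real/fake logit map per style; only the labeled head is used."""

    def __init__(self, num_styles, ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.heads = nn.Conv2d(ch * 4, num_styles, 1)

    def forward(self, x, style_idx):
        out = self.heads(self.backbone(x))  # (B, num_styles, h, w)
        idx = style_idx.view(-1, 1, 1, 1).expand(-1, 1, out.size(2), out.size(3))
        return out.gather(1, idx)           # logits for each sample's own style
```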
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
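Learning the spatial correspondence between content and reference glyphs is commonly realized as cross-attention: each content location queries the reference locations and gathers their fine-grained local styles. This generic sketch illustrates the idea and is not FS-Font's exact module.

```python
import torch
import torch.nn.functional as F

def aggregate_local_styles(content_q, ref_k, ref_v):
    """Cross-attention from content-glyph locations to reference locations.

    content_q:    (B, N, D) queries from the content feature map.
    ref_k, ref_v: (B, M, D) keys/values from reference-glyph feature maps.
    """
    scale = content_q.size(-1) ** 0.5
    attn = F.softmax(content_q @ ref_k.transpose(1, 2) / scale, dim=-1)
    return attn @ ref_v  # (B, N, D) local styles spatially aligned to content
```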
- Few-shot Font Generation with Weakly Supervised Localized Representations [17.97183447033118]
We propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only eight reference glyphs) than other state-of-the-art methods.
arXiv Detail & Related papers (2021-12-22T14:26:53Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
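The dual latent manifolds in the entry above separate what is drawn (the character) from how it is drawn (the font). The toy sketch below factorizes a glyph decoder over two embedding tables; the paper's actual model is considerably richer, so treat this purely as an illustration of the two-manifold factorization.

```python
import torch
import torch.nn as nn

class DualLatentFontModel(nn.Module):
    """Glyphs decoded from a font-style latent plus a character latent."""

    def __init__(self, num_fonts, num_chars, font_dim=128, char_dim=128):
        super().__init__()
        self.font_emb = nn.Embedding(num_fonts, font_dim)  # style manifold
        self.char_emb = nn.Embedding(num_chars, char_dim)  # character manifold
        self.decoder = nn.Sequential(
            nn.Linear(font_dim + char_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64),  # flattened 64x64 glyph
        )

    def forward(self, font_idx, char_idx):
        z = torch.cat([self.font_emb(font_idx), self.char_emb(char_idx)], dim=-1)
        return self.decoder(z).view(-1, 1, 64, 64)
```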
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
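A glyph as a permutation-invariant set of implicit functions can be sketched as an MLP that outputs several field values per query point and combines them with an order-free reduction. The min-reduction here is an assumption for illustration (it preserves sharp corners formed by intersecting fields); the paper's combination rule differs in detail.

```python
import torch
import torch.nn as nn

class MultiImplicitGlyph(nn.Module):
    """Maps 2-D points to N implicit values, combined permutation-invariantly."""

    def __init__(self, n_implicits=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_implicits),  # one value per implicit function
        )

    def forward(self, xy):
        # xy: (..., 2) query coordinates -> (...,) combined field value.
        return self.net(xy).min(dim=-1).values  # min is order-free over the set
```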
- Few-shot Font Generation with Localized Style Representations and Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
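The factorization in the title above splits a component-wise style code into a per-font factor and a per-component factor, so localized styles for unseen (font, component) pairs can be composed. This embedding-based sketch is a simplification of LF-Font's feature-level factorization.

```python
import torch
import torch.nn as nn

class FactorizedLocalStyle(nn.Module):
    """Localized style code = per-font factor * per-component factor."""

    def __init__(self, num_fonts, num_components, rank=64, dim=256):
        super().__init__()
        self.style_factor = nn.Embedding(num_fonts, rank)
        self.comp_factor = nn.Embedding(num_components, rank)
        self.proj = nn.Linear(rank, dim)

    def forward(self, font_idx, comp_idx):
        # Elementwise product composes the two factors into one style code.
        f = self.style_factor(font_idx) * self.comp_factor(comp_idx)
        return self.proj(f)  # (B, dim) style code for this font's component
```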
- Exploring Font-independent Features for Scene Text Recognition [22.34023249700896]
Scene text recognition (STR) has been extensively studied in the last few years.
Many recently proposed methods are specially designed to accommodate the arbitrary shape, layout, and orientation of scene texts.
These methods, in which font features and content features of characters are entangled, perform poorly on scene images containing texts in novel font styles.
arXiv Detail & Related papers (2020-09-16T03:36:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.