Few-shot Font Generation by Learning Style Difference and Similarity
- URL: http://arxiv.org/abs/2301.10008v1
- Date: Tue, 24 Jan 2023 13:57:25 GMT
- Title: Few-shot Font Generation by Learning Style Difference and Similarity
- Authors: Xiao He, Mingrui Zhu, Nannan Wang, Xinbo Gao and Heng Yang
- Abstract summary: We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
- Score: 84.76381937516356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot font generation (FFG) aims to preserve the underlying global
structure of the original character while generating target fonts by referring
to a few samples. It has been applied to font library creation, personalized
signatures, and other scenarios. Existing FFG methods explicitly disentangle the
content and style of reference glyphs either universally or component-wise. However,
they ignore the difference between glyphs in different styles and the
similarity of glyphs in the same style, which results in artifacts such as
local distortions and style inconsistency. To address this issue, we propose a
novel font generation approach by learning the Difference between different
styles and the Similarity of the same style (DS-Font). We introduce contrastive
learning to model the positive and negative relationships between styles.
Specifically, we propose a multi-layer style projector for style encoding and
realize a distinctive style representation via our proposed Cluster-level
Contrastive Style (CCS) loss. In addition, we design a multi-task patch
discriminator, which comprehensively considers different areas of the image and
ensures that each style can be distinguished independently. We conduct
comprehensive qualitative and quantitative evaluations to demonstrate that our
approach achieves significantly better results than state-of-the-art
methods.
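The abstract gives no implementation details, but the cluster-level contrastive idea can be sketched concretely. The PyTorch snippet below is a minimal, hypothetical illustration, not the paper's actual CCS loss: it pulls each glyph's style code toward the mean code (cluster center) of its own font and pushes it away from the centers of the other fonts in the batch. The function name ccs_loss, the temperature value, and the per-batch center computation are all assumptions.

```python
# Hypothetical sketch of a cluster-level contrastive style (CCS) loss.
# DS-Font's exact formulation is not given in the abstract; this only
# illustrates contrasting each style code against per-font cluster centers.
import torch
import torch.nn.functional as F

def ccs_loss(style_codes: torch.Tensor, font_ids: torch.Tensor,
             tau: float = 0.07) -> torch.Tensor:
    """style_codes: (N, D) embeddings from a style encoder;
    font_ids: (N,) integer font labels; tau: temperature (assumed value)."""
    z = F.normalize(style_codes, dim=1)              # unit-norm style codes
    fonts = font_ids.unique()                        # sorted unique labels (K,)
    # Cluster center = mean style code of each font in the batch, re-normalized.
    # (The paper may maintain centers differently, e.g., with a memory bank.)
    centers = F.normalize(
        torch.stack([z[font_ids == f].mean(dim=0) for f in fonts]), dim=1)
    logits = z @ centers.t() / tau                   # similarity to every center
    targets = torch.searchsorted(fonts, font_ids)    # index of own-font center
    # Positive pair: a code and its own font's center; negatives: other centers.
    return F.cross_entropy(logits, targets)

# Usage: 16 style codes drawn from 4 reference fonts.
codes = torch.randn(16, 128)
labels = torch.randint(0, 4, (16,))
loss = ccs_loss(codes, labels)
```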
Related papers
- StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples [48.44036251656947]
Style representations aim to embed texts with similar writing styles closely and texts with different styles far apart, regardless of content.
We introduce StyleDistance, a novel approach to training stronger content-independent style embeddings.
arXiv Detail & Related papers (2024-10-16T17:25:25Z)
- FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning [45.696909070215476]
FontDiffuser is a diffusion-based image-to-image one-shot font generation method.
It consistently excels on complex characters and large style changes compared to previous methods.
arXiv Detail & Related papers (2023-12-19T13:23:20Z)
- Few shot font generation via transferring similarity guided global style and quantization local style [11.817299400850176]
We present a novel font generation approach by aggregating styles from character similarity-guided global features and stylized component-level representations.
Our AFFG method can obtain a complete set of component-level style representations while also controlling the global glyph characteristics.
arXiv Detail & Related papers (2023-09-02T05:05:40Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Few-shot Font Generation with Weakly Supervised Localized Representations [17.97183447033118]
We propose a novel font generation method that learns localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only eight reference glyphs) than other state-of-the-art methods.
arXiv Detail & Related papers (2021-12-22T14:26:53Z)
- Few-shot Font Generation with Localized Style Representations and Factorization [23.781619323447003]
We propose a novel font generation method by learning localized styles, namely component-wise style representations, instead of universal styles.
Our method shows remarkably better few-shot font generation results (with only 8 reference glyph images) than other state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:33:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.