Neural Style Difference Transfer and Its Application to Font Generation
- URL: http://arxiv.org/abs/2001.07321v1
- Date: Tue, 21 Jan 2020 03:32:44 GMT
- Title: Neural Style Difference Transfer and Its Application to Font Generation
- Authors: Gantugs Atarsaikhan, Brian Kenji Iwana and Seiichi Uchida
- Abstract summary: We introduce a method to create fonts automatically.
The difference in font style between two fonts is found and transferred to another font using neural style transfer.
- Score: 14.567067583556717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing fonts requires a great deal of time and effort. It requires
professional skills, such as sketching, vectorizing, and image editing.
Additionally, each letter has to be designed individually. In this paper, we
introduce a method to create fonts automatically. In our proposed method,
the difference in font style between two fonts is found and
transferred to another font using neural style transfer. Neural style transfer
is a method of stylizing the contents of an image with the style of another
image. We propose a novel neural style difference loss and content difference loss
for neural style transfer. With these losses, new fonts can be generated by
adding or removing font styles from a font. We provide experimental results
with various combinations of input fonts and discuss limitations and future
development of the proposed method.
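The listing does not spell out the loss formulation, but the idea of transferring a difference of styles can be sketched with standard neural-style-transfer machinery. The PyTorch snippet below is a minimal illustration under that assumption, not the authors' implementation: it assumes VGG-like feature maps, uses Gram matrices as the style statistic, and the function names and layer choices are hypothetical.

```python
# Minimal sketch (not the authors' reference code) of a Gram-matrix-based
# "style difference" loss, assuming VGG-like feature maps from several layers.
# Idea: the style gap between font A and font B is measured as a difference of
# Gram matrices, and the generated glyph's style shift relative to the source
# font is pushed to match that gap.

import torch
import torch.nn.functional as F


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a feature map with shape (batch, channels, h, w)."""
    b, c, h, w = features.shape
    flat = features.reshape(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)


def style_difference_loss(gen_feats, font_a_feats, font_b_feats, src_feats):
    """Hypothetical style-difference loss over a list of layer activations.

    Encourages the generated glyph's style shift (relative to the source font)
    to match the style shift observed between font A and font B.
    """
    loss = torch.zeros(())
    for g, fa, fb, s in zip(gen_feats, font_a_feats, font_b_feats, src_feats):
        target_shift = gram_matrix(fb) - gram_matrix(fa)  # style difference A -> B
        current_shift = gram_matrix(g) - gram_matrix(s)   # style shift of the output
        loss = loss + F.mse_loss(current_shift, target_shift)
    return loss


if __name__ == "__main__":
    # Toy usage with random tensors standing in for network activations;
    # passing identical features everywhere yields a loss of (near) zero.
    layers = [torch.randn(1, 64, 32, 32) for _ in range(3)]
    print(style_difference_loss(layers, layers, layers, layers))
```

In a full optimization loop, a loss of this kind would be minimized over the pixels of the output image together with a content-difference term; the sketch above only illustrates the style-difference part.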
Related papers
- FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning [45.696909070215476]
FontDiffuser is a diffusion-based image-to-image one-shot font generation method.
It consistently excels on complex characters and large style changes compared to previous methods.
arXiv Detail & Related papers (2023-12-19T13:23:20Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate the font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Font Representation Learning via Paired-glyph Matching [15.358456947574913]
We propose a novel font representation learning scheme to embed font styles into the latent space.
For the discriminative representation of a font from others, we propose a paired-glyph matching-based font representation learning model.
We show our font representation learning scheme achieves better generalization performance than existing font representation learning techniques.
arXiv Detail & Related papers (2022-11-20T12:27:27Z)
- FontNet: Closing the gap to font designer performance in font synthesis [3.991334489146843]
We propose a model, called FontNet, that learns to separate font styles in the embedding space, where distances directly correspond to a measure of font similarity.
We design a network architecture and training procedure that can be adopted for any language system and can produce high-resolution font images.
arXiv Detail & Related papers (2022-05-13T08:37:10Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Automatic Generation of Chinese Handwriting via Fonts Style Representation Learning [7.196855749519688]
This system can generate new-style fonts from latent style-related embedding variables.
Our method is simpler and more effective than other methods, which will help improve font design efficiency.
arXiv Detail & Related papers (2020-03-27T23:34:01Z)
- Character-independent font identification [11.86456063377268]
We propose a method of determining whether any two characters are from the same font.
We use a Convolutional Neural Network (CNN) trained with various font image pairs.
We then evaluate the model on a different set of fonts that are unseen by the network.
arXiv Detail & Related papers (2020-01-24T05:59:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.