Character-independent font identification
- URL: http://arxiv.org/abs/2001.08893v1
- Date: Fri, 24 Jan 2020 05:59:53 GMT
- Title: Character-independent font identification
- Authors: Daichi Haraguchi, Shota Harada, Brian Kenji Iwana, Yuto Shinahara,
Seiichi Uchida
- Abstract summary: We propose a method of determining if any two characters are from the same font or not.
We use a Convolutional Neural Network (CNN) trained with various font image pairs.
We then evaluate the model on a different set of fonts that are unseen by the network.
- Score: 11.86456063377268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are countless fonts with various shapes and styles. In
addition, many fonts differ only in subtle features. Due to this, font
identification is a difficult task. In this paper, we propose a method of
determining whether any two characters are from the same font or not. This is
difficult because the difference between fonts is typically smaller than the
difference between alphabet classes. Additionally, the proposed method can be
used with fonts regardless of whether they exist in the training set or not.
In order to accomplish this, we use a Convolutional Neural Network (CNN)
trained with various font image pairs. In the experiment, the network is
trained on image pairs of various fonts. We then evaluate the model on a
different set of fonts that are unseen by the network. The evaluation achieves
an accuracy of 92.27%. Moreover, we analyze the relationship between character
classes and font identification accuracy.
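As a concrete (and heavily simplified) illustration of the pair-verification framing described in the abstract, the sketch below replaces the paper's learned CNN embedding with a hand-crafted two-dimensional feature. The function names, features, and threshold are hypothetical choices for illustration, not the authors' implementation.

```python
def extract_features(glyph):
    """Toy stand-in for a learned CNN embedding: ink density and
    horizontal symmetry of a binary glyph image (list of 0/1 rows)."""
    h, w = len(glyph), len(glyph[0])
    ink = sum(sum(row) for row in glyph) / (h * w)
    sym = sum(
        1 for r in range(h) for c in range(w) if glyph[r][c] == glyph[r][w - 1 - c]
    ) / (h * w)
    return (ink, sym)


def same_font(glyph_a, glyph_b, threshold=0.15):
    """Predict whether two glyphs share a font by thresholding the
    Euclidean distance between their feature vectors."""
    fa, fb = extract_features(glyph_a), extract_features(glyph_b)
    dist = sum((x - y) ** 2 for x, y in zip(fa, fb)) ** 0.5
    return dist < threshold
```

The key property carried over from the paper is that the decision is made on a *pair* of character images, so the same comparator can be applied to fonts never seen during training.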
Related papers
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
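The codebook lookup at the heart of a VQGAN can be sketched as a nearest-neighbour quantization step. The function below is a generic illustration with a fixed toy codebook, not VQ-Font's learned font-token prior.

```python
import numpy as np

def quantize(features, codebook):
    """Core vector-quantization step: replace each feature vector with its
    nearest codebook entry. In a VQGAN the codebook is learned jointly with
    the encoder/decoder; here it is just a given array of shape (k, d)."""
    # Pairwise Euclidean distances between n features and k codes: (n, k)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx
```

Snapping synthesized glyph features onto codebook entries is what lets this family of methods pull generated strokes back toward the distribution of real ones.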
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- Combining OCR Models for Reading Early Modern Printed Books
We study the usage of fine-grained font recognition on OCR for books printed from the 15th to the 18th century.
We show that OCR performance is strongly impacted by font style and that selecting fine-tuned models with font group recognition has a very positive impact on the results.
arXiv Detail & Related papers (2023-05-11T20:43:50Z)
- CF-Font: Content Fusion for Few-shot Font Generation
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
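Projecting a content feature into the linear span of basis-font content features can be sketched, under the assumption of a simple least-squares fit, as follows. This is an illustrative stand-in for CF-Font's content fusion module, not its actual implementation.

```python
import numpy as np

def fuse_content(target_feat, basis_feats):
    """Approximate a target content feature as a linear combination of
    basis-font content features via least squares, then normalize the
    mixing weights to sum to 1. Assumes the weights do not sum to zero."""
    B = np.stack(basis_feats, axis=1)               # shape (d, k)
    w, *_ = np.linalg.lstsq(B, target_feat, rcond=None)
    w = w / w.sum()                                 # normalized mixing weights
    return B @ w, w
```

Expressing content as a mixture of a few basis fonts is what makes the representation compact: a new font's content only needs the mixing weights, not a full feature.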
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
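A generic InfoNCE-style contrastive objective conveys the idea behind a cluster-level contrastive style loss: pull embeddings of the same style toward the anchor and push different-style embeddings away. This is a simplified stand-in; the exact CCS formulation in the paper may differ.

```python
import numpy as np

def contrastive_style_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss over cosine similarities. Low loss when the
    anchor is much closer to same-style (positive) embeddings than to
    different-style (negative) ones; tau is a temperature."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(np.array([cos(anchor, p) for p in positives]) / tau)
    neg = np.exp(np.array([cos(anchor, n) for n in negatives]) / tau)
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))
```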
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Font Representation Learning via Paired-glyph Matching
We propose a novel font representation learning scheme to embed font styles into the latent space.
To learn a representation that discriminates one font from others, we propose a paired-glyph matching-based font representation learning model.
We show our font representation learning scheme achieves better generalization performance than the existing font representation learning techniques.
arXiv Detail & Related papers (2022-11-20T12:27:27Z)
- Scalable Font Reconstruction with Dual Latent Manifolds
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- A Multi-Implicit Neural Representation for Fonts
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
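The idea of representing a shape with an implicit function can be illustrated with a hand-written signed distance function for a circle. The paper learns such functions for glyphs; everything below is a toy stand-in for that idea.

```python
import math

def circle_sdf(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance to a circle: negative inside, zero on the outline,
    positive outside. The shape's boundary is the exact zero level set,
    which is why implicit representations keep edges and corners sharp."""
    return math.hypot(x - cx, y - cy) - r

def render(sdf, size=8, extent=1.5):
    """Rasterize an implicit shape at any resolution by sampling the sign
    of its SDF on a grid over [-extent, extent]^2."""
    step = 2 * extent / (size - 1)
    return [
        [1 if sdf(-extent + c * step, -extent + row * step) <= 0 else 0
         for c in range(size)]
        for row in range(size)
    ]
```

Because rendering is just re-sampling the function, the same representation yields a glyph at any resolution without pixelation.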
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Attribute2Font: Creating Fonts You Want From Attributes
Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values.
A novel unit named Attribute Attention Module is designed to make those generated glyph images better embody the prominent font attributes.
arXiv Detail & Related papers (2020-05-16T04:06:53Z)
- Multiform Fonts-to-Fonts Translation via Style and Content Disentangled Representations of Chinese Character
The main purpose of this paper is to design a network framework that can extract and recombine the content and style of the characters.
The paper combines various depth networks such as Convolutional Neural Network, Multi-layer Perceptron and Residual Network to find the optimal model.
The results show that the generated characters are very close to real characters, as measured by the Structural Similarity index and Peak Signal-to-Noise Ratio evaluation criteria.
arXiv Detail & Related papers (2020-03-28T04:30:00Z)
- Neural Style Difference Transfer and Its Application to Font Generation
We introduce a method to create fonts automatically.
The difference in font styles between two fonts is found and transferred to another font using neural style transfer.
arXiv Detail & Related papers (2020-01-21T03:32:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.