Which Parts determine the Impression of the Font?
- URL: http://arxiv.org/abs/2103.14216v1
- Date: Fri, 26 Mar 2021 02:13:24 GMT
- Title: Which Parts determine the Impression of the Font?
- Authors: M. Ueda, A. Kimura, S. Uchida
- Abstract summary: Various fonts give different impressions, such as legible, rough, and comic-text.
By focusing on local shapes instead of the whole letter shape, we can realize a letter-shape-independent and more general analysis.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various fonts give different impressions, such as legible, rough, and
comic-text. This paper aims to analyze the correlation between local shapes, or
parts, and the impression of fonts. By focusing on local shapes instead of the
whole letter shape, we can realize a letter-shape-independent and more general
analysis. The analysis is performed by newly combining SIFT and DeepSets to
extract an arbitrary number of essential parts from a particular font and
aggregate them to infer the font's impressions by nonlinear regression. Our
qualitative and quantitative analyses show that (1) fonts with similar parts
have similar impressions, (2) many impressions, such as legible and rough,
largely depend on specific parts, and (3) several impressions are largely
unrelated to parts.
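
The pipeline described in the abstract lends itself to a compact sketch: SIFT yields a variable-size set of local part descriptors per font, a shared encoder embeds each part, and permutation-invariant sum pooling (the DeepSets scheme) feeds a nonlinear regression head that predicts impression scores. The code below is a minimal illustration, not the authors' implementation; the layer widths, the 10-impression output, and the use of raw 128-dimensional SIFT descriptors as part features are assumptions made for the sake of the example.

```python
# Minimal sketch: SIFT parts -> shared per-part encoder -> sum pooling -> regression.
# Sizes (hidden=256, num_impressions=10) are illustrative assumptions.
import cv2
import torch
import torch.nn as nn

class PartImpressionRegressor(nn.Module):
    def __init__(self, desc_dim: int = 128, hidden: int = 256, num_impressions: int = 10):
        super().__init__()
        # phi: shared encoder applied to each SIFT descriptor independently
        self.phi = nn.Sequential(
            nn.Linear(desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # rho: nonlinear regression head on the aggregated set representation
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_impressions),
        )

    def forward(self, descriptors: torch.Tensor) -> torch.Tensor:
        # descriptors: (num_parts, desc_dim); num_parts may differ per font
        encoded = self.phi(descriptors)   # (num_parts, hidden)
        pooled = encoded.sum(dim=0)       # sum pooling: order- and size-invariant
        return self.rho(pooled)           # (num_impressions,) impression scores

def extract_parts(image_path: str) -> torch.Tensor:
    """Extract SIFT descriptors (local part shapes) from a glyph image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # descriptors is a (num_keypoints, 128) float32 array (assumed non-empty here)
    _, descriptors = sift.detectAndCompute(image, None)
    return torch.from_numpy(descriptors).float()

# Usage: scores = PartImpressionRegressor()(extract_parts("font_glyph_A.png"))
```

Because the pooling is a plain sum over parts, the predictor is invariant to the order of the keypoints and tolerant of their number, which is what allows an arbitrary number of essential parts per font.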
Related papers
- VitaGlyph: Vitalizing Artistic Typography with Flexible Dual-branch Diffusion Models [53.59400446543756]
We introduce a dual-branch and training-free method, namely VitaGlyph, to enable flexible artistic typography.
VitaGlyph treats the input character as a scene composed of a Subject and its Surrounding, and renders them under varying degrees of geometric transformation.
Experimental results demonstrate that VitaGlyph not only achieves better artistry and readability but also manages to depict multiple customized concepts.
arXiv Detail & Related papers (2024-10-02T16:48:47Z)
- Impression-CLIP: Contrastive Shape-Impression Embedding for Fonts [7.542892664684078]
We propose Impression-CLIP, a novel machine-learning model based on CLIP (Contrastive Language-Image Pre-training).
In our experiment, we perform cross-modal retrieval between fonts and impressions through co-embedding (a rough sketch of this co-embedding objective appears after this list).
The results indicate that Impression-CLIP achieves better retrieval accuracy than the state-of-the-art method.
arXiv Detail & Related papers (2024-02-26T07:07:18Z)
- Font Impression Estimation in the Wild [7.542892664684078]
We use a font dataset with annotations of font impressions and a convolutional neural network (CNN) framework for this task.
We propose an exemplar-based impression estimation approach, which relies on a strategy of ensembling impressions of exemplar fonts that are similar to the input image.
We conduct a correlation analysis between book genres and font impressions on real book cover images.
arXiv Detail & Related papers (2024-02-23T10:00:25Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- Analyzing Font Style Usage and Contextual Factors in Real Images [12.387676601792899]
This paper analyzes the relationship between font styles and contextual factors that might affect font style selection with large-scale datasets.
We will analyze the relationship between font style and its surrounding object (such as "bus") by using about 800,000 words in the Open Images dataset.
arXiv Detail & Related papers (2023-06-21T06:43:22Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Font Shape-to-Impression Translation [15.228202509283248]
This paper tackles part-based shape-impression analysis based on the Transformer architecture.
It is able to handle the correlation among local parts by its self-attention mechanism.
arXiv Detail & Related papers (2022-03-11T09:02:25Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
- Shared Latent Space of Font Shapes and Impressions [9.205278113241473]
We realize a shared latent space where a font shape image and its impression words are embedded in a cross-modal manner.
This latent space is useful to understand the style-impression correlation and generate font images by specifying several impression words.
arXiv Detail & Related papers (2021-03-23T06:54:45Z)
- Let Me Choose: From Verbal Context to Font Selection [50.293897197235296]
We aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to.
We introduce a new dataset, containing examples of different topics in social media posts and ads, labeled through crowd-sourcing.
arXiv Detail & Related papers (2020-05-03T17:36:17Z)
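
Several entries above, Impression-CLIP in particular, rely on CLIP-style cross-modal co-embedding: a font-image encoder and an impression-word encoder are trained so that matching pairs land close together in a shared space. Below is a rough sketch of the symmetric contrastive objective commonly used for such co-embedding; the encoders, embedding dimension, and temperature are illustrative assumptions, not values taken from any of the papers.

```python
# Symmetric InfoNCE loss for cross-modal co-embedding (CLIP-style sketch).
# font_emb and impression_emb come from two placeholder encoders; row i of
# each batch is assumed to be a matching font/impression pair.
import torch
import torch.nn.functional as F

def contrastive_loss(font_emb: torch.Tensor,
                     impression_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # Normalize so the dot product is a cosine similarity
    font_emb = F.normalize(font_emb, dim=-1)
    impression_emb = F.normalize(impression_emb, dim=-1)
    logits = font_emb @ impression_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(font_emb.size(0))              # diagonal entries are positives
    # Classify the correct match in both retrieval directions (font->impression
    # and impression->font), then average
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage: loss = contrastive_loss(font_encoder(images), text_encoder(words))
```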
This list is automatically generated from the titles and abstracts of the papers on this site.