Shared Latent Space of Font Shapes and Impressions
- URL: http://arxiv.org/abs/2103.12347v1
- Date: Tue, 23 Mar 2021 06:54:45 GMT
- Title: Shared Latent Space of Font Shapes and Impressions
- Authors: Jihun Kang, Daichi Haraguchi, Akisato Kimura, Seiichi Uchida
- Abstract summary: We realize a shared latent space where a font shape image and its impression words are embedded in a cross-modal manner.
This latent space is useful to understand the style-impression correlation and generate font images by specifying several impression words.
- Score: 9.205278113241473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We have specific impressions from the style of a typeface (font), suggesting
that there are correlations between font shape and its impressions. Based on
this hypothesis, we realize a shared latent space where a font shape image and
its impression words are embedded in a cross-modal manner. This latent space is
useful to understand the style-impression correlation and generate font images
by specifying several impression words. Experimental results with a large
style-impression dataset prove that it is possible to accurately realize the
shared latent space, especially for shape-relevant impression words, and then
use the space to generate font images with various impressions.
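The cross-modal embedding described in the abstract can be caricatured as two projections into one shared space, with image-to-word matching by cosine similarity. Below is a minimal NumPy sketch with illustrative dimensions and random (untrained) weights; the function names, feature sizes, and projection matrices are assumptions for illustration, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: raw font-image features and impression-word
# features are projected into one shared latent space.
IMG_DIM, WORD_DIM, LATENT_DIM = 64, 32, 16

W_img = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(IMG_DIM)
W_word = rng.normal(size=(WORD_DIM, LATENT_DIM)) / np.sqrt(WORD_DIM)

def embed(x, W):
    """Project features into the shared space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def retrieve(img_feats, word_feats):
    """For each font image, return the index of the closest impression word."""
    zi = embed(img_feats, W_img)    # (n_images, LATENT_DIM)
    zw = embed(word_feats, W_word)  # (n_words, LATENT_DIM)
    sim = zi @ zw.T                 # cosine similarities: images x words
    return sim.argmax(axis=1)

# Toy batch: 4 font images, 5 candidate impression words.
imgs = rng.normal(size=(4, IMG_DIM))
words = rng.normal(size=(5, WORD_DIM))
print(retrieve(imgs, words))  # one word index per image
```

In the paper the two projections would be learned so that a font image and its annotated impression words land close together; here the weights are random, so the sketch only illustrates the retrieval mechanics, not meaningful matches.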
Related papers
- StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples [48.44036251656947]
Style representations aim to embed texts with similar writing styles closely and texts with different styles far apart, regardless of content.
We introduce StyleDistance, a novel approach to training stronger content-independent style embeddings.
arXiv Detail & Related papers (2024-10-16T17:25:25Z)
- GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models [18.15911470339845]
We introduce a diffusion-based method, termed GRIF-DM, to generate fonts that vividly embody specific impressions.
Our experimental results, conducted on the MyFonts dataset, affirm that this method is capable of producing realistic, vibrant, and high-fidelity fonts.
arXiv Detail & Related papers (2024-08-14T02:26:46Z)
- Impression-CLIP: Contrastive Shape-Impression Embedding for Fonts [7.542892664684078]
We propose Impression-CLIP, a novel machine-learning model based on CLIP (Contrastive Language-Image Pre-training).
In our experiment, we perform cross-modal retrieval between fonts and impressions through co-embedding.
The results indicate that Impression-CLIP achieves better retrieval accuracy than the state-of-the-art method.
arXiv Detail & Related papers (2024-02-26T07:07:18Z)
- Font Impression Estimation in the Wild [7.542892664684078]
We use a font dataset with annotation about font impressions and a convolutional neural network (CNN) framework for this task.
We propose an exemplar-based impression estimation approach, which relies on a strategy of ensembling impressions of exemplar fonts that are similar to the input image.
We conduct a correlation analysis between book genres and font impressions on real book cover images.
arXiv Detail & Related papers (2024-02-23T10:00:25Z)
- Analyzing Font Style Usage and Contextual Factors in Real Images [12.387676601792899]
This paper analyzes the relationship between font styles and contextual factors that might affect font style selection with large-scale datasets.
We will analyze the relationship between font style and its surrounding object (such as "bus") by using about 800,000 words in the Open Images dataset.
arXiv Detail & Related papers (2023-06-21T06:43:22Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles [90.39288370855115]
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Which Parts determine the Impression of the Font? [0.0]
Various fonts give different impressions, such as legible, rough, and comic-text.
By focusing on local shapes instead of the whole letter shape, we can realize letter-shape independent and more general analysis.
arXiv Detail & Related papers (2021-03-26T02:13:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.