AdaptiFont: Increasing Individuals' Reading Speed with a Generative Font
Model and Bayesian Optimization
- URL: http://arxiv.org/abs/2104.10741v1
- Date: Wed, 21 Apr 2021 19:56:28 GMT
- Authors: Florian Kadner, Yannik Keller and Constantin A. Rothkopf
- Abstract summary: AdaptiFont is a human-in-the-loop system aimed at interactively increasing readability of text displayed on a monitor.
We generate new TrueType fonts through active learning, render texts with the new font, and measure individual users' reading speed.
The results of a user study show that this adaptive font generation system finds regions in the font space corresponding to high reading speeds, that these fonts significantly increase participants' reading speed, and that the found fonts are significantly different across individual readers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital text has become one of the primary ways of exchanging knowledge, but
text needs to be rendered to a screen to be read. We present AdaptiFont, a
human-in-the-loop system aimed at interactively increasing the readability of
text displayed on a monitor. To this end, we first learn a generative font
space with non-negative matrix factorization from a set of classic fonts. In
this space we generate new TrueType fonts through active learning, render
texts with the new font, and measure individual users' reading speed. Bayesian
optimization sequentially generates new fonts on the fly to progressively
increase individuals' reading speed. The results of a user study show that this
adaptive font generation system finds regions in the font space corresponding
to high reading speeds, that these fonts significantly increase participants'
reading speed, and that the found fonts are significantly different across
individual readers.
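The pipeline described in the abstract, learning a non-negative font space with NMF and then running Bayesian optimization over it against measured reading speed, can be sketched as follows. This is a reconstruction for illustration, not the authors' code: the font-feature matrix, the 3-dimensional space, the synthetic `reading_speed` function, and the upper-confidence-bound acquisition rule are all assumptions, since the abstract does not specify the feature representation, dimensionality, or acquisition function.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: generative font space via NMF (V ~= W @ H, all non-negative) ---
def nmf(V, k, iters=200, eps=1e-9):
    """Plain multiplicative-update NMF; stands in for the paper's font-space learning."""
    n, m = V.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Stand-in for stacked non-negative outline features of a set of classic fonts.
V = rng.random((20, 50))
W, H = nmf(V, k=3)  # rows of H span a 3-dimensional generative font space

# --- Step 2: Bayesian optimization over font coefficients w in that space ---
def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ls ** 2))

def gp_posterior(Xq, X, y, noise=1e-4):
    """GP regression posterior mean/variance at query points Xq (zero-mean prior)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def reading_speed(w):
    """Synthetic 'user' who reads fastest near coefficients (0.6, 0.3, 0.8).
    In the real system this would be a measured reading-speed trial."""
    return -((w - np.array([0.6, 0.3, 0.8])) ** 2).sum()

X = rng.random((3, 3))                       # initial font coefficients tried
y = np.array([reading_speed(w) for w in X])
cand = rng.random((500, 3))                  # candidate fonts in the NMF space
for _ in range(25):                          # sequential reading trials
    mu, var = gp_posterior(cand, X, y - y.mean())
    ucb = (mu + y.mean()) + 2.0 * np.sqrt(var)  # upper-confidence-bound acquisition
    w_next = cand[np.argmax(ucb)]
    X = np.vstack([X, w_next])
    y = np.append(y, reading_speed(w_next))

best = X[np.argmax(y)]  # coefficients of the fastest font observed so far
```

In the actual system each `reading_speed` evaluation corresponds to rendering a text in the candidate font and timing a participant, so the loop is budget-limited to a small number of trials per user rather than cheap function calls.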
Related papers
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- Combining OCR Models for Reading Early Modern Printed Books
We study the usage of fine-grained font recognition on OCR for books printed from the 15th to the 18th century.
We show that OCR performance is strongly impacted by font style and that selecting fine-tuned models with font group recognition has a very positive impact on the results.
arXiv Detail & Related papers (2023-05-11T20:43:50Z)
- GlyphDiffusion: Text Generation as Image Generation
We propose GlyphDiffusion, a novel diffusion approach for text generation via text-guided image generation.
Our key idea is to render the target text as a glyph image containing visual language content.
Our model also improves significantly over recent diffusion-based text generation models.
arXiv Detail & Related papers (2023-04-25T02:14:44Z)
- CF-Font: Content Fusion for Few-shot Font Generation
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also makes it possible to optimize the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Font Representation Learning via Paired-glyph Matching
We propose a novel font representation learning scheme to embed font styles into the latent space.
For the discriminative representation of a font from others, we propose a paired-glyph matching-based font representation learning model.
We show our font representation learning scheme achieves better generalization performance than existing font representation learning techniques.
arXiv Detail & Related papers (2022-11-20T12:27:27Z)
- Few-Shot Font Generation by Learning Fine-Grained Local Styles
Few-shot font generation (FFG) aims to generate a new font with a few examples.
We propose a new font generation approach by learning 1) the fine-grained local styles from references, and 2) the spatial correspondence between the content and reference glyphs.
arXiv Detail & Related papers (2022-05-20T05:07:05Z)
- Scalable Font Reconstruction with Dual Latent Manifolds
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Learning Perceptual Manifold of Fonts
We propose the perceptual manifold of fonts to visualize the perceptual adjustment in the latent space of a generative model of fonts.
In contrast to a conventional user interface, the proposed font-exploration interface efficiently helps users adjust fonts toward their preferences.
arXiv Detail & Related papers (2021-06-17T01:22:52Z)
- FONTNET: On-Device Font Understanding and Prediction Pipeline
We propose two engines: Font Detection Engine and Font Prediction Engine.
First, we develop a novel CNN architecture for identifying the font style of text in images.
Second, we design a novel algorithm for predicting similar fonts for a given query font.
Third, we optimize and deploy the entire pipeline on-device, which preserves privacy and reduces latency in real-time applications such as instant messaging.
arXiv Detail & Related papers (2021-03-30T08:11:24Z)
- Few-shot Compositional Font Generation with Dual Memory
We propose a novel font generation framework, named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of the compositionality.
In the experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates a significantly better quality of samples with faithful stylization.
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.