DualVector: Unsupervised Vector Font Synthesis with Dual-Part
Representation
- URL: http://arxiv.org/abs/2305.10462v1
- Date: Wed, 17 May 2023 08:18:06 GMT
- Title: DualVector: Unsupervised Vector Font Synthesis with Dual-Part
Representation
- Authors: Ying-Tian Liu, Zhifei Zhang, Yuan-Chen Guo, Matthew Fisher, Zhaowen
Wang, Song-Hai Zhang
- Abstract summary: Current font synthesis methods fail to represent the shape concisely or require vector supervision during training.
We propose a novel dual-part representation for vector glyphs, where each glyph is modeled as a collection of closed "positive" and "negative" path pairs.
Our method, named DualVector, outperforms state-of-the-art methods, and its synthesized fonts can be converted to common digital font formats for practical use.
- Score: 43.64428946288288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic generation of fonts can be an important aid to typeface design.
Many current approaches regard glyphs as pixelated images, which present
artifacts when scaling and inevitable quality losses after vectorization. On
the other hand, existing vector font synthesis methods either fail to represent
the shape concisely or require vector supervision during training. To push the
quality of vector font synthesis to the next level, we propose a novel
dual-part representation for vector glyphs, where each glyph is modeled as a
collection of closed "positive" and "negative" path pairs. The glyph contour is
then obtained by boolean operations on these paths. We first learn such a
representation only from glyph images and devise a subsequent contour
refinement step to align the contour with an image representation to further
enhance details. Our method, named DualVector, outperforms state-of-the-art
methods in vector font synthesis both quantitatively and qualitatively. Our
synthesized vector fonts can be easily converted to common digital font formats
like TrueType Font for practical use. The code is released at
https://github.com/thuliu-yt16/dualvector.
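The dual-part idea in the abstract can be illustrated with a small sketch: a glyph region is the boolean difference between the union of "positive" closed paths and the union of "negative" ones. The polygon approximation, the ray-casting test, and the square "O" example below are illustrative assumptions, not the paper's actual Bezier-based pipeline.

```python
# Minimal sketch of the dual-part representation: a glyph is the union of
# closed "positive" paths minus the union of closed "negative" paths.
# Paths are approximated here as point lists (polygons); the paper uses
# Bezier paths, but the boolean logic is the same.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the closed polygon `poly`?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def in_glyph(x, y, positive_paths, negative_paths):
    """A point is in the glyph if it lies in some positive path
    and in no negative path (boolean difference)."""
    in_pos = any(point_in_polygon(x, y, p) for p in positive_paths)
    in_neg = any(point_in_polygon(x, y, p) for p in negative_paths)
    return in_pos and not in_neg

# A square "O": the outer square is a positive path, the inner hole negative.
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
inner = [(3, 3), (7, 3), (7, 7), (3, 7)]

print(in_glyph(1, 5, [outer], [inner]))  # on the ring -> True
print(in_glyph(5, 5, [outer], [inner]))  # in the hole -> False
```

Representing holes as explicit negative paths is what lets every path stay a simple closed curve; the actual contour then falls out of the boolean operation.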
Related papers
- HFH-Font: Few-shot Chinese Font Synthesis with Higher Quality, Faster Speed, and Higher Resolution [17.977410216055024]
We introduce HFH-Font, a few-shot font synthesis method capable of efficiently generating high-resolution glyph images.
For the first time, large-scale Chinese vector fonts of a quality comparable to those manually created by professional font designers can be automatically generated.
arXiv Detail & Related papers (2024-10-09T02:30:24Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z) - CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows to optimize the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- VecFontSDF: Learning to Reconstruct and Synthesize High-quality Vector Fonts via Signed Distance Functions [15.47282857047361]
This paper proposes an end-to-end trainable method, VecFontSDF, to reconstruct and synthesize high-quality vector fonts.
Based on the proposed SDF-based implicit shape representation, VecFontSDF learns to model each glyph as shape primitives enclosed by several parabolic curves.
arXiv Detail & Related papers (2023-03-22T16:14:39Z)
- XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font Generation [13.569449355929574]
We propose a self-supervised cross-modality pre-training strategy and a cross-modality transformer-based encoder.
The encoder is conditioned jointly on the glyph image and the corresponding stroke labels.
It requires only one reference glyph and achieves the lowest rate of bad cases in the few-shot font generation task, 28% lower than the second best.
arXiv Detail & Related papers (2022-04-11T13:34:40Z)
- DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning [21.123297001902177]
We propose a novel method, DeepVecFont, to generate visually-pleasing vector glyphs.
The highlights of this paper are threefold. First, we design a dual-modality learning strategy which utilizes both image-aspect and sequence-aspect features of fonts to synthesize vector glyphs.
Second, we provide a new generative paradigm to handle unstructured data (e.g., vector glyphs) by randomly sampling plausible results to get the optimal one which is further refined under the guidance of generated structured data.
arXiv Detail & Related papers (2021-10-13T12:57:19Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- Font Completion and Manipulation by Cycling Between Multi-Modality Representations [113.26243126754704]
We explore the generation of font glyphs as 2D graphic objects, with a graph as an intermediate representation.
We formulate a cross-modality cycled image-to-image structure with a graph constructed between an image encoder and an image decoder.
Our model generates better results than both the image-to-image baseline and previous state-of-the-art methods for glyph completion.
arXiv Detail & Related papers (2021-08-30T02:43:29Z)
- Learning Implicit Glyph Shape Representation [6.413829791927052]
We present a novel implicit glyph shape representation, which models glyphs as shape primitives enclosed by quadratic curves and naturally enables generating glyph images at arbitrarily high resolutions.
Based on the proposed representation, we design a simple yet effective disentangled network for the challenging one-shot font style transfer problem.
arXiv Detail & Related papers (2021-06-16T06:42:55Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
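Several of the entries above (VecFontSDF, Learning Implicit Glyph Shape Representation, A Multi-Implicit Neural Representation for Fonts) share the implicit-field idea: a glyph is a function whose sign marks inside versus outside, so it can be rasterized at any resolution. A minimal sketch under toy assumptions, using circles as primitives in place of the learned curves or neural fields those papers use:

```python
import math

# Toy illustration of an implicit glyph representation: a signed field
# f(x, y) is negative inside the shape and positive outside, so the same
# field can be sampled on a grid of any size.

def circle_sdf(x, y, cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def glyph_sdf(x, y):
    # Union of primitives = pointwise minimum of their signed distances.
    return min(circle_sdf(x, y, 0.3, 0.5, 0.2),
               circle_sdf(x, y, 0.7, 0.5, 0.2))

def rasterize(res):
    """Sample the field on a res x res grid over [0, 1]^2; '#' marks inside."""
    rows = []
    for j in range(res):
        y = (j + 0.5) / res
        rows.append("".join(
            "#" if glyph_sdf((i + 0.5) / res, y) < 0 else "."
            for i in range(res)))
    return rows

for row in rasterize(16):
    print(row)
```

Because the field, not a pixel grid, is the primary object, calling `rasterize` with a larger `res` yields a sharper image with no stored-resolution limit, which is the property these papers exploit for glyphs.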