VecFontSDF: Learning to Reconstruct and Synthesize High-quality Vector
Fonts via Signed Distance Functions
- URL: http://arxiv.org/abs/2303.12675v1
- Date: Wed, 22 Mar 2023 16:14:39 GMT
- Title: VecFontSDF: Learning to Reconstruct and Synthesize High-quality Vector
Fonts via Signed Distance Functions
- Authors: Zeqing Xia, Bojun Xiong, Zhouhui Lian
- Abstract summary: This paper proposes an end-to-end trainable method, VecFontSDF, to reconstruct and synthesize high-quality vector fonts.
Based on the proposed SDF-based implicit shape representation, VecFontSDF learns to model each glyph as shape primitives enclosed by several parabolic curves.
- Score: 15.47282857047361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Font design is of vital importance in the digital content design and modern
printing industry. Developing algorithms capable of automatically synthesizing
vector fonts can significantly facilitate the font design process. However,
existing methods mainly concentrate on raster image generation, and only a few
approaches can directly synthesize vector fonts. This paper proposes an
end-to-end trainable method, VecFontSDF, to reconstruct and synthesize
high-quality vector fonts using signed distance functions (SDFs). Specifically,
based on the proposed SDF-based implicit shape representation, VecFontSDF
learns to model each glyph as shape primitives enclosed by several parabolic
curves, which can be precisely converted to quadratic Bézier curves that are
widely used in vector font products. In this manner, most image generation
methods can be easily extended to synthesize vector fonts. Qualitative and
quantitative experiments conducted on a publicly-available dataset demonstrate
that our method obtains high-quality results on several tasks, including vector
font reconstruction, interpolation, and few-shot vector font synthesis,
markedly outperforming the state of the art.
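To make the conversion claim above concrete: every quadratic Bézier segment traces a parabolic arc, and conversely a parabolic arc corresponds to exactly one quadratic Bézier segment whose middle control point is the intersection of the two endpoint tangents. Below is a minimal Python sketch of this fact, written for illustration only (it is not the authors' code, and all function names are made up); it converts an arc of an explicit parabola y = a*x^2 + b*x + c into control points and checks the match pointwise.

    def parabola_arc_to_quadratic_bezier(a, b, c, x0, x1):
        """Convert the arc of y = a*x^2 + b*x + c over [x0, x1] into
        quadratic Bezier control points (P0, P1, P2)."""
        y = lambda x: a * x * x + b * x + c
        slope = lambda x: 2 * a * x + b
        xm = 0.5 * (x0 + x1)  # endpoint tangents of a parabola meet at the mid-abscissa
        p0 = (x0, y(x0))
        p1 = (xm, y(x0) + slope(x0) * (xm - x0))  # tangent at x0, evaluated at xm
        p2 = (x1, y(x1))
        return p0, p1, p2

    def quadratic_bezier(p0, p1, p2, t):
        """Evaluate B(t) = (1-t)^2*P0 + 2*t*(1-t)*P1 + t^2*P2."""
        u = 1.0 - t
        return tuple(u * u * q0 + 2.0 * u * t * q1 + t * t * q2
                     for q0, q1, q2 in zip(p0, p1, p2))

    # The Bezier segment reproduces the parabola up to floating-point error.
    p0, p1, p2 = parabola_arc_to_quadratic_bezier(2.0, -1.0, 0.5, -1.0, 1.5)
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        x, y_bz = quadratic_bezier(p0, p1, p2, t)
        assert abs(y_bz - (2.0 * x * x - 1.0 * x + 0.5)) < 1e-9

Because the mapping is exact in both directions, primitives bounded by parabolic curves can be decoded into standard quadratic Bézier outlines with no approximation error, which is what makes this representation attractive for vector font products.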
Related papers
- HFH-Font: Few-shot Chinese Font Synthesis with Higher Quality, Faster Speed, and Higher Resolution [17.977410216055024]
We introduce HFH-Font, a few-shot font synthesis method capable of efficiently generating high-resolution glyph images.
For the first time, large-scale Chinese vector fonts of a quality comparable to those manually created by professional font designers can be automatically generated.
arXiv Detail & Related papers (2024-10-09T02:30:24Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and
Quantization [52.870638830417]
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- DualVector: Unsupervised Vector Font Synthesis with Dual-Part
Representation [43.64428946288288]
Current font synthesis methods fail to represent the shape concisely or require vector supervision during training.
We propose a novel dual-part representation for vector glyphs, where each glyph is modeled as a collection of closed "positive" and "negative" path pairs.
Our method, named DualVector, outperforms state-of-the-art methods for practical use.
arXiv Detail & Related papers (2023-05-17T08:18:06Z)
- DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with
Higher Quality [38.32966391626858]
This paper proposes an enhanced version of DeepVecFont for vector font synthesis.
We adopt Transformers instead of RNNs to process sequential data and design a relaxation representation for vector outlines.
We also propose to sample auxiliary points in addition to control points to precisely align the generated and target Bézier curves or lines.
arXiv Detail & Related papers (2023-03-25T23:28:19Z)
- CF-Font: Content Fusion for Few-shot Font Generation [63.79915037830131]
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and font variations, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality
Learning [21.123297001902177]
We propose a novel method, DeepVecFont, to generate visually-pleasing vector glyphs.
The highlights of this paper are threefold. First, we design a dual-modality learning strategy which utilizes both image-aspect and sequence-aspect features of fonts to synthesize vector glyphs.
Second, we provide a new generative paradigm to handle unstructured data (e.g., vector glyphs) by randomly sampling plausible results to get the optimal one, which is then refined under the guidance of the generated structured data.
arXiv Detail & Related papers (2021-10-13T12:57:19Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- A Multi-Implicit Neural Representation for Fonts [79.6123184198301]
Font-specific discontinuities like edges and corners are difficult to represent using neural networks.
We introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing such features; a rough max-min composition in this spirit is sketched after this list.
arXiv Detail & Related papers (2021-06-12T21:40:11Z)
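A recurring idea in this list, from VecFontSDF's primitives bounded by parabolic curves to the multi-implicits above, is composing simple implicit functions with min and max: intersection of curve-bounded regions via min, union of primitives via max. The Python sketch below illustrates such a pseudo-SDF evaluation under assumed conventions; it is not any paper's actual code, and the 6-coefficient parabola encoding and the sign convention (positive means inside) are illustrative guesses.

    import numpy as np

    def parabola_value(coeffs, pts):
        """Signed quadratic g(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f,
        evaluated on an array of 2D points; g > 0 is taken as 'inside'."""
        a, b, c, d, e, f = coeffs
        x, y = pts[..., 0], pts[..., 1]
        return a * x**2 + b * x * y + c * y**2 + d * x + e * y + f

    def glyph_pseudo_sdf(primitives, pts):
        """primitives: list of primitives, each a list of 6-tuples (one per
        bounding parabola). Intersection -> min over curves; union -> max
        over primitives."""
        per_primitive = [
            np.min(np.stack([parabola_value(c, pts) for c in prim]), axis=0)
            for prim in primitives
        ]
        return np.max(np.stack(per_primitive), axis=0)

    # Toy usage: one primitive bounded by two axis-aligned parabolas.
    grid = np.stack(np.meshgrid(np.linspace(-1.0, 1.0, 64),
                                np.linspace(-1.0, 1.0, 64)), axis=-1)
    prim = [(-1.0, 0.0, 0.0, 0.0, 1.0, 0.25),   # inside where y > x^2 - 0.25
            (0.0, 0.0, -1.0, 1.0, 0.0, 0.25)]   # inside where x > y^2 - 0.25
    field = glyph_pseudo_sdf([prim], grid)
    occupancy = field > 0.0  # rasterized interior of the composed shape

Because min and max are differentiable almost everywhere, a composition like this can be trained end-to-end against raster supervision, which matches the stated goal of extending image-generation methods to vector output; the actual loss and primitive parameterization used by VecFontSDF are not reproduced here.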