Calliffusion: Chinese Calligraphy Generation and Style Transfer with
Diffusion Modeling
- URL: http://arxiv.org/abs/2305.19124v1
- Date: Tue, 30 May 2023 15:34:45 GMT
- Title: Calliffusion: Chinese Calligraphy Generation and Style Transfer with
Diffusion Modeling
- Authors: Qisheng Liao, Gus Xia, Zhinuo Wang
- Abstract summary: We propose Calliffusion, a system for generating high-quality Chinese calligraphy using diffusion models.
Our model architecture is based on DDPM (Denoising Diffusion Probabilistic Models).
- Score: 1.856334276134661
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we propose Calliffusion, a system for generating high-quality
Chinese calligraphy using diffusion models. Our model architecture is based on
DDPM (Denoising Diffusion Probabilistic Models), and it is capable of
generating common characters in five different scripts and mimicking the styles
of famous calligraphers. Experiments demonstrate that our model can generate
calligraphy that is difficult to distinguish from real artworks and that our
controls for characters, scripts, and styles are effective. Moreover, we
demonstrate one-shot transfer learning, using LoRA (Low-Rank Adaptation) to
transfer Chinese calligraphy art styles to unseen characters and even
out-of-domain symbols such as English letters and digits.
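As a rough illustration of the two ingredients the abstract names (a DDPM noise-prediction objective conditioned on character, script, and calligrapher style, and LoRA-style low-rank adapters for one-shot style transfer), here is a minimal PyTorch sketch. Everything in it, including the TinyDenoiser model, the embedding-sum conditioning, the tensor sizes, and the hyperparameters, is an assumption made for clarity; it is not the authors' released code.

```python
# Illustrative sketch only (not the Calliffusion implementation):
# a class-conditional DDPM training step for glyph images, plus a minimal
# LoRA adapter of the kind that could support one-shot style transfer.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal retention

class TinyDenoiser(nn.Module):
    """Toy noise predictor conditioned on (character, script, style) IDs."""
    def __init__(self, n_chars=1000, n_scripts=5, n_styles=20, dim=64):
        super().__init__()
        # Single embedding table; char/script/style IDs occupy disjoint ranges.
        self.cond = nn.Embedding(n_chars + n_scripts + n_styles, dim)
        self.t_embed = nn.Embedding(T, dim)
        self.net = nn.Sequential(
            nn.Conv2d(1 + dim, dim, 3, padding=1), nn.SiLU(),
            nn.Conv2d(dim, 1, 3, padding=1),
        )

    def forward(self, x, t, cond_ids):
        # Sum condition and timestep embeddings, broadcast over the image.
        emb = self.cond(cond_ids).sum(1) + self.t_embed(t)
        emb = emb[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, emb], dim=1))

def ddpm_loss(model, x0, cond_ids):
    """Standard DDPM objective: predict the noise added at a random timestep."""
    t = torch.randint(0, T, (x0.size(0),))
    noise = torch.randn_like(x0)
    a = alpha_bar[t][:, None, None, None]
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    return F.mse_loss(model(x_t, t, cond_ids), noise)

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (W + B @ A)."""
    def __init__(self, base: nn.Linear, rank=4, alpha=1.0):
        super().__init__()
        self.base, self.scale = base, alpha / rank
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Toy usage: one training step on random "glyph" images.
model = TinyDenoiser()
x0 = torch.randn(8, 1, 64, 64)         # batch of 64x64 grayscale glyphs
cond = torch.randint(0, 1000, (8, 3))  # toy (char, script, style) IDs
loss = ddpm_loss(model, x0, cond)
loss.backward()

# One-shot transfer idea: freeze the trained denoiser and fine-tune only
# small LoRA adapters inserted into its layers on a single reference glyph.
adapted_layer = LoRALinear(nn.Linear(64, 64), rank=4)
```

The design point the sketch is meant to convey is that the base diffusion weights stay fixed during transfer; only the low-rank A and B matrices are trained, which is what makes adaptation from a single example feasible.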
Related papers
- Moyun: A Diffusion-Based Model for Style-Specific Chinese Calligraphy Generation [10.7430517947254]
'Moyun' can effectively control the generation process and produce calligraphy in the specified style.
Even for characters the calligrapher never wrote, 'Moyun' can generate calligraphy that matches that calligrapher's style.
arXiv Detail & Related papers (2024-10-10T05:14:03Z)
- Ada-adapter: Fast Few-shot Style Personalization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z)
- CalliPaint: Chinese Calligraphy Inpainting with Diffusion Model [17.857394263321538]
We introduce a new model that harnesses recent advancements in both Chinese calligraphy generation and image inpainting.
We demonstrate that our proposed model CalliPaint can produce convincing Chinese calligraphy.
arXiv Detail & Related papers (2023-12-03T23:29:59Z)
- Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners [88.07317175639226]
We propose a novel approach, Discriminative Stable Diffusion (DSD), which turns pre-trained text-to-image diffusion models into few-shot discriminative learners.
Our approach mainly uses the cross-attention score of a Stable Diffusion model to capture the mutual influence between visual and textual information.
arXiv Detail & Related papers (2023-05-18T05:41:36Z)
- DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion [10.75789076591325]
We introduce a novel method to automatically generate an artistic typography by stylizing one or more letter fonts.
Our approach utilizes large language models to bridge texts and visual images for stylization and builds an unsupervised generative model.
arXiv Detail & Related papers (2023-03-16T19:12:52Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- Few-shot Font Generation by Learning Style Difference and Similarity [84.76381937516356]
We propose a novel font generation approach by learning the Difference between different styles and the Similarity of the same style (DS-Font).
Specifically, we propose a multi-layer style projector for style encoding and realize a distinctive style representation via our proposed Cluster-level Contrastive Style (CCS) loss.
arXiv Detail & Related papers (2023-01-24T13:57:25Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation [110.45944936952309]
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library by giving only one sample as the reference.
The well-trained Diff-Font is not only robust to font gap and font variation, but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- ShufaNet: Classification method for calligraphers who have reached the professional level [0.0]
We propose a novel method, ShufaNet, to classify Chinese calligraphers' styles based on metric learning.
Our method achieved a 65% accuracy rate on our dataset for few-shot learning, surpassing ResNet and other mainstream CNNs.
arXiv Detail & Related papers (2021-11-22T16:55:31Z)
- Scalable Font Reconstruction with Dual Latent Manifolds [55.29525824849242]
We propose a deep generative model that performs typography analysis and font reconstruction.
Our approach enables us to massively scale up the number of character types we can effectively model.
We evaluate on the task of font reconstruction over various datasets representing character types of many languages.
arXiv Detail & Related papers (2021-09-10T20:37:43Z)
- CalliGAN: Style and Structure-aware Chinese Calligraphy Character Generator [6.440233787863018]
Chinese calligraphy is the writing of Chinese characters as an art form performed with brushes.
Recent studies show that Chinese characters can be generated through image-to-image translation for multiple styles using a single model.
We propose a novel method that extends this approach by incorporating Chinese characters' component information into the model.
arXiv Detail & Related papers (2020-05-26T03:15:03Z)