CalliPaint: Chinese Calligraphy Inpainting with Diffusion Model
- URL: http://arxiv.org/abs/2312.01536v1
- Date: Sun, 3 Dec 2023 23:29:59 GMT
- Title: CalliPaint: Chinese Calligraphy Inpainting with Diffusion Model
- Authors: Qisheng Liao, Zhinuo Wang, Muhammad Abdul-Mageed, Gus Xia
- Abstract summary: We introduce a new model that harnesses recent advancements in both Chinese calligraphy generation and image inpainting.
We demonstrate that our proposed model CalliPaint can produce convincing Chinese calligraphy.
- Score: 17.857394263321538
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Chinese calligraphy can be viewed as a unique form of visual art. Recent
advancements in computer vision hold significant potential for the future
development of generative models in the realm of Chinese calligraphy.
Nevertheless, methods of Chinese calligraphy inpainting, which can be
effectively used in the art and education fields, remain relatively unexplored.
In this paper, we introduce a new model that harnesses recent advancements in
both Chinese calligraphy generation and image inpainting. We demonstrate that
our proposed model CalliPaint can produce convincing Chinese calligraphy.
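This listing includes no code; as a rough illustration of the kind of diffusion-based inpainting that CalliPaint builds on, the sketch below calls the publicly documented Hugging Face `diffusers` inpainting pipeline. The checkpoint id, prompt, and file names are placeholders for illustration, not CalliPaint's actual release.

```python
# Illustrative diffusion-based inpainting with the `diffusers` library.
# The checkpoint, prompt, and file paths are placeholders, not CalliPaint's weights.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # generic public inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("character.png").convert("RGB")  # calligraphy with a damaged region
mask = Image.open("mask.png").convert("RGB")        # white pixels mark the area to repaint

result = pipe(
    prompt="a Chinese calligraphy character, black ink brush strokes on paper",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```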
Related papers
- Moyun: A Diffusion-Based Model for Style-Specific Chinese Calligraphy Generation [10.7430517947254]
'Moyun' can effectively control the generation process and produce calligraphy in the specified style.
Even for characters the calligrapher never wrote, 'Moyun' can generate calligraphy that matches that calligrapher's style.
arXiv Detail & Related papers (2024-10-10T05:14:03Z)
- Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training [68.41837295318152]
Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with visual texts.
Existing backbone models have limitations such as misspelled output, failure to render text at all, and lack of support for Chinese text.
We propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese.
arXiv Detail & Related papers (2024-10-06T10:25:39Z)
- HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models [59.01600111737628]
HD-Painter is a training-free approach that accurately follows prompts and coherently scales to high-resolution image inpainting.
To this end, we design the Prompt-Aware Introverted Attention (PAIntA) layer, which enhances self-attention scores.
Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches quantitatively and qualitatively.
arXiv Detail & Related papers (2023-12-21T18:09:30Z)
- Calliffusion: Chinese Calligraphy Generation and Style Transfer with Diffusion Modeling [1.856334276134661]
We propose Calliffusion, a system for generating high-quality Chinese calligraphy using diffusion models.
Our model architecture is based on DDPM (Denoising Diffusion Probabilistic Models); a minimal sketch of the DDPM training objective is given after this list.
arXiv Detail & Related papers (2023-05-30T15:34:45Z)
- CCLAP: Controllable Chinese Landscape Painting Generation via Latent Diffusion Model [54.74470985388726]
We propose CCLAP, a controllable Chinese landscape painting generation method based on a latent diffusion model.
Our method achieves state-of-the-art performance, especially in artful composition and artistic conception.
arXiv Detail & Related papers (2023-04-09T04:16:28Z)
- PaCaNet: A Study on CycleGAN with Transfer Learning for Diversifying Fused Chinese Painting and Calligraphy [8.724826658340415]
PaCaNet is a CycleGAN-based pipeline for producing novel artworks that fuse two different art types, traditional Chinese painting and calligraphy.
Our approach creates a unique aesthetic experience rooted in the origins of Chinese hieroglyphic characters.
arXiv Detail & Related papers (2023-01-30T17:22:10Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models [63.545146807810305]
Text-to-image diffusion models can generate high-quality pictures from textual input prompts.
These models have been trained using text data collected from content-based labelling protocols.
We characterise the sentimentality, objectiveness and degree of abstraction of publicly available text data used to train current text-to-image diffusion models.
arXiv Detail & Related papers (2022-10-19T14:20:05Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high-quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- ShufaNet: Classification method for calligraphers who have reached the professional level [0.0]
We propose a novel method, ShufaNet, to classify Chinese calligraphers' styles based on metric learning.
Our method achieved a 65% accuracy rate on our dataset for few-shot learning, surpassing ResNet and other mainstream CNNs.
arXiv Detail & Related papers (2021-11-22T16:55:31Z)
- CalliGAN: Style and Structure-aware Chinese Calligraphy Character Generator [6.440233787863018]
Chinese calligraphy is the writing of Chinese characters as an art form performed with brushes.
Recent studies show that Chinese characters can be generated through image-to-image translation for multiple styles using a single model.
We extend this approach by incorporating Chinese characters' component information into the model.
arXiv Detail & Related papers (2020-05-26T03:15:03Z)
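The Calliffusion entry above reports building on DDPM. As a rough, self-contained sketch of the standard DDPM training objective (Ho et al., 2020) in PyTorch, the block below uses an illustrative linear noise schedule; the schedule, tensor shapes, and the `model(x_t, t)` interface are assumptions, not the paper's actual configuration.

```python
# Minimal sketch of one DDPM training step (epsilon-prediction objective).
# The schedule and model interface are illustrative, not Calliffusion's setup.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def ddpm_loss(model, x0):
    """Noise a clean batch x0 to random timesteps t, then train the
    network to predict the noise that was added."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    # Forward process q(x_t | x_0): closed-form noising to step t.
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # The network predicts the added noise; MSE is the training loss.
    return F.mse_loss(model(x_t, t), noise)
```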
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.