Learning to Generate Poetic Chinese Landscape Painting with Calligraphy
- URL: http://arxiv.org/abs/2305.04719v1
- Date: Mon, 8 May 2023 14:10:10 GMT
- Title: Learning to Generate Poetic Chinese Landscape Painting with Calligraphy
- Authors: Shaozu Yuan, Aijun Dai, Zhiling Yan, Ruixue Liu, Meng Chen, Baoyang
Chen, Zhijie Qiu, Xiaodong He
- Abstract summary: Polaca is a novel system to generate poetic Chinese landscape painting with calligraphy.
It is equipped with three different modules to complete the whole piece of landscape painting artwork.
- Score: 15.33820176664941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel system (denoted as Polaca) to generate
poetic Chinese landscape painting with calligraphy. Unlike previous single
image-to-image painting generation, Polaca takes the classic poetry as input
and outputs the artistic landscape painting image with the corresponding
calligraphy. It is equipped with three modules to complete the whole piece of
landscape painting artwork: a text-to-image module to generate the landscape
painting image, an image-to-image module to generate the stylistic calligraphy
image, and an image fusion module to fuse the two images into a single
aesthetic artwork.
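The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the module composition only; the class and function names are assumptions, not Polaca's published API.

```python
# Sketch of Polaca's three-stage pipeline: poem -> painting, poem -> calligraphy,
# then fusion of the two images into one artwork. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Image:
    """Placeholder for a rendered image (a pixel array in practice)."""
    kind: str
    source: str

def text_to_painting(poem: str) -> Image:
    # Module 1: text-to-image generation of the landscape painting.
    return Image(kind="painting", source=poem)

def poem_to_calligraphy(poem: str) -> Image:
    # Module 2: image-to-image generation of stylistic calligraphy
    # (rendered glyphs translated into a calligraphic style).
    return Image(kind="calligraphy", source=poem)

def fuse(painting: Image, calligraphy: Image) -> Image:
    # Module 3: fuse painting and calligraphy into one piece,
    # e.g. by placing the calligraphy in a low-saliency region.
    return Image(kind="artwork", source=f"{painting.source} + calligraphy")

def polaca(poem: str) -> Image:
    return fuse(text_to_painting(poem), poem_to_calligraphy(poem))

artwork = polaca("空山新雨后")
print(artwork.kind)  # → artwork
```

The point of the sketch is the decomposition: the calligraphy branch operates independently of the painting branch, and only the fusion module sees both outputs.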
Related papers
- LPGen: Enhancing High-Fidelity Landscape Painting Generation through Diffusion Model [1.7966001353008776]
This paper presents LPGen, a high-fidelity, controllable model for landscape painting generation.
We introduce a novel multi-modal framework that integrates image prompts into the diffusion model.
We implement a decoupled cross-attention strategy to ensure compatibility between image and text prompts, facilitating multi-modal image generation.
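The decoupled cross-attention idea mentioned above can be illustrated in a few lines: the latent queries attend to text tokens and image-prompt tokens through separate cross-attention passes, and the two results are combined additively. This is a generic sketch of that pattern under assumed shapes and names, not LPGen's published code.

```python
# Generic decoupled cross-attention: separate key/value projections per
# modality, outputs summed. All names and shapes are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    """Standard scaled dot-product cross-attention."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores) @ v

def decoupled_cross_attention(q, k_text, v_text, k_img, v_img, img_scale=1.0):
    """Queries attend to text and image prompts separately, so each
    modality keeps its own keys/values; results are combined additively."""
    return (cross_attention(q, k_text, v_text)
            + img_scale * cross_attention(q, k_img, v_img))

rng = np.random.default_rng(0)
q = rng.normal(size=(16, 64))    # latent queries
k_t = rng.normal(size=(8, 64))   # text-prompt keys
v_t = rng.normal(size=(8, 64))   # text-prompt values
k_i = rng.normal(size=(4, 64))   # image-prompt keys
v_i = rng.normal(size=(4, 64))   # image-prompt values
out = decoupled_cross_attention(q, k_t, v_t, k_i, v_i, img_scale=0.5)
print(out.shape)  # (16, 64)
```

Keeping the two attention passes separate is what lets the image prompt be weighted (or dropped) independently of the text prompt at inference time.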
arXiv Detail & Related papers (2024-07-24T12:32:24Z)
- DLP-GAN: Learning to Draw Modern Chinese Landscape Photos with Generative Adversarial Network [20.74857981451259]
Chinese landscape painting has a unique and artistic style, and its drawing technique is highly abstract in both the use of color and the realistic representation of objects.
Previous methods focus on transferring from modern photos to ancient ink paintings, but little attention has been paid to translating landscape paintings into modern photos.
arXiv Detail & Related papers (2024-03-06T04:46:03Z)
- Space Narrative: Generating Images and 3D Scenes of Chinese Garden from Text using Deep Learning [0.0]
We propose a deep learning method to generate garden paintings from text descriptions.
Our image-text pair dataset consists of more than one thousand Ming Dynasty Garden paintings and their inscriptions and post-scripts.
A latent text-to-image diffusion model learns the mapping from descriptive texts to garden paintings of the Ming Dynasty, and then the text description of Jichang Garden guides the model to generate new garden paintings.
arXiv Detail & Related papers (2023-11-01T07:16:01Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- TextPainter: Multimodal Text Image Generation with Visual-harmony and Text-comprehension for Poster Design [50.8682912032406]
This study introduces TextPainter, a novel multimodal approach to generate text images.
TextPainter takes the global-local background image as a hint of style and guides the text image generation with visual harmony.
We construct the PosterT80K dataset, consisting of about 80K posters annotated with sentence-level bounding boxes and text contents.
arXiv Detail & Related papers (2023-08-09T06:59:29Z)
- Text-Guided Synthesis of Eulerian Cinemagraphs [81.20353774053768]
We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions.
We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures.
arXiv Detail & Related papers (2023-07-06T17:59:31Z)
- CCLAP: Controllable Chinese Landscape Painting Generation via Latent Diffusion Model [54.74470985388726]
We present CCLAP, a controllable Chinese landscape painting generation method based on a latent diffusion model.
Our method achieves state-of-the-art performance, especially in artful composition and artistic conception.
arXiv Detail & Related papers (2023-04-09T04:16:28Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- GenText: Unsupervised Artistic Text Generation via Decoupled Font and Texture Manipulation [30.654807125764965]
We propose a novel approach, namely GenText, to achieve general artistic text style transfer.
Specifically, our work incorporates three different stages, stylization, destylization, and font transfer.
Considering the difficulty of acquiring paired artistic text images, our model is designed under an unsupervised setting.
arXiv Detail & Related papers (2022-07-20T04:42:47Z)
- Paint4Poem: A Dataset for Artistic Visualization of Classical Chinese Poems [20.72849584295798]
We construct a new dataset called Paint4Poem.
Paint4Poem consists of 301 high-quality poem-painting pairs collected manually from an influential modern Chinese artist.
We analyze Paint4Poem regarding poem diversity, painting style, and the semantic relevance between poems and paintings.
arXiv Detail & Related papers (2021-09-23T22:57:16Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.