PaCaNet: A Study on CycleGAN with Transfer Learning for Diversifying
Fused Chinese Painting and Calligraphy
- URL: http://arxiv.org/abs/2301.13082v5
- Date: Sun, 21 May 2023 13:46:57 GMT
- Authors: Zuhao Yang, Huajun Bai, Zhang Luo, Yang Xu, Wei Pang, Yue Wang,
Yisheng Yuan, Yingfang Yuan
- Abstract summary: PaCaNet is a CycleGAN-based pipeline for producing novel artworks that fuse two different art types, traditional Chinese painting and calligraphy.
Our approach creates a unique aesthetic experience rooted in the origins of Chinese hieroglyphic characters.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AI-Generated Content (AIGC) has recently gained a surge in popularity,
powered by its high efficiency and consistency in production, and its
capability of being customized and diversified. The cross-modality nature of
the representation learning mechanism in most AIGC technology allows for more
freedom and flexibility in exploring new types of art that would be impossible
in the past. Inspired by the pictogram subset of Chinese characters, we
propose PaCaNet, a CycleGAN-based pipeline for producing novel artworks that
fuse two different art types: traditional Chinese painting and calligraphy. In
an effort to produce stable and diversified output, we adopted three main
technical innovations:
1. Using one-shot learning to increase the creativity of pre-trained models and diversify the content of the fused images.
2. Controlling the preference over generated Chinese calligraphy by freezing randomly sampled parameters in pre-trained models.
3. Using a regularization method to encourage the models to produce images similar to Chinese paintings.
Furthermore, we conducted a systematic study of PaCaNet's performance in
diversifying fused Chinese painting and calligraphy, with satisfactory
results. In conclusion, we provide a new direction for creating art by fusing
the visual information in paintings with the stroke features in Chinese
calligraphy. Our approach creates a unique aesthetic experience rooted in the
origins of Chinese hieroglyphic characters. It is also a unique
opportunity to delve deeper into traditional artwork and, in doing so, to
create a meaningful impact on preserving and revitalizing traditional heritage.
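The abstract's second innovation, controlling the output by freezing randomly sampled parameters of a pre-trained model, can be sketched in a few lines. This is a minimal, framework-agnostic illustration only; the function name, its arguments, and the plain-Python list of parameter names are assumptions for the example, not the authors' published interface.

```python
import random

def freeze_random_parameters(param_names, freeze_ratio, seed=None):
    """Randomly pick a fraction of pre-trained parameter (tensor) names to
    freeze during fine-tuning, controlling how strongly the pre-trained
    calligraphy features carry over into the fused output.

    param_names : list of parameter identifiers (hypothetical names here)
    freeze_ratio: fraction of parameters to freeze, in [0, 1]
    seed        : optional seed so the sampled subset is reproducible
    """
    rng = random.Random(seed)
    n_freeze = int(len(param_names) * freeze_ratio)
    frozen = set(rng.sample(param_names, n_freeze))
    # In an actual deep-learning framework (e.g. PyTorch) this step would
    # set requires_grad = False on the sampled tensors before training.
    return frozen

# Example: freeze half of four generator parameters, reproducibly.
names = ["g.conv1.weight", "g.conv1.bias", "g.conv2.weight", "g.conv2.bias"]
frozen = freeze_random_parameters(names, freeze_ratio=0.5, seed=0)
```

Varying `freeze_ratio` (or resampling with a different seed) gives a simple knob over how much of the pre-trained calligraphy model is preserved versus adapted, which is the spirit of the control described in the abstract.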
Related papers
- Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training [68.41837295318152]
Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with visual texts.
Existing backbone models have limitations such as misspelling, failing to generate texts, and lack of support for Chinese text.
We propose a series of methods, aiming to empower backbone models to generate visual texts in English and Chinese.
arXiv Detail & Related papers (2024-10-06T10:25:39Z)
- Ada-adapter: Fast Few-shot Style Personalization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z)
- Few-shot Calligraphy Style Learning [0.0]
"Presidifussion" is a novel approach to learning and replicating the unique style of calligraphy of President Xu.
We introduce innovative techniques of font image conditioning and stroke information conditioning, enabling the model to capture the intricate structural elements of Chinese characters.
This work not only presents a breakthrough in the digital preservation of calligraphic art but also sets a new standard for data-efficient generative modeling in the domain of cultural heritage digitization.
arXiv Detail & Related papers (2024-04-26T07:17:09Z)
- DLP-GAN: learning to draw modern Chinese landscape photos with generative adversarial network [20.74857981451259]
Chinese landscape painting has a unique and artistic style, and its drawing technique is highly abstract in both the use of color and the realistic representation of objects.
Previous methods focus on transferring from modern photos to ancient ink paintings, but little attention has been paid to translating landscape paintings into modern photos.
arXiv Detail & Related papers (2024-03-06T04:46:03Z)
- CalliPaint: Chinese Calligraphy Inpainting with Diffusion Model [17.857394263321538]
We introduce a new model that harnesses recent advancements in both Chinese calligraphy generation and image inpainting.
We demonstrate that our proposed model CalliPaint can produce convincing Chinese calligraphy.
arXiv Detail & Related papers (2023-12-03T23:29:59Z)
- Interactive Neural Painting [66.9376011879115]
This paper proposes the first approach for Interactive Neural Painting (NP).
We propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder.
Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art.
arXiv Detail & Related papers (2023-07-31T07:02:00Z)
- Calliffusion: Chinese Calligraphy Generation and Style Transfer with Diffusion Modeling [1.856334276134661]
We propose Calliffusion, a system for generating high-quality Chinese calligraphy using diffusion models.
Our model architecture is based on DDPM (Denoising Diffusion Probabilistic Models).
arXiv Detail & Related papers (2023-05-30T15:34:45Z)
- StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation [103.88928334431786]
We present a novel method for generating high-quality, stylized 3D avatars.
We use pre-trained image-text diffusion models for data generation and a Generative Adversarial Network (GAN)-based 3D generation network for training.
Our approach demonstrates superior performance over current state-of-the-art methods in terms of visual quality and diversity of the produced avatars.
arXiv Detail & Related papers (2023-05-30T13:09:21Z)
- CCLAP: Controllable Chinese Landscape Painting Generation via Latent Diffusion Model [54.74470985388726]
We propose a controllable Chinese landscape painting generation method named CCLAP.
Our method achieves state-of-the-art performance, especially in artful composition and artistic conception.
arXiv Detail & Related papers (2023-04-09T04:16:28Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which influences several human cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- A Framework and Dataset for Abstract Art Generation via CalligraphyGAN [0.0]
We present a creative framework based on Conditional Generative Adversarial Networks and Contextual Neural Language Model to generate abstract artworks.
Our work is inspired by Chinese calligraphy, which is a unique form of visual art where the character itself is an aesthetic painting.
arXiv Detail & Related papers (2020-12-02T16:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.