Continuation of Famous Art with AI: A Conditional Adversarial Network
Inpainting Approach
- URL: http://arxiv.org/abs/2110.09170v1
- Date: Mon, 18 Oct 2021 10:39:32 GMT
- Title: Continuation of Famous Art with AI: A Conditional Adversarial Network
Inpainting Approach
- Authors: Jordan J. Bird
- Abstract summary: This work explores the application of image inpainting to continue famous artworks and produce generative art with a Conditional GAN.
During training, the borders of images are cropped away, and an inpainting GAN learns to reconstruct the original image from the centre crop by minimising both adversarial and absolute-difference losses.
Once trained, images are resized rather than cropped and presented as input to the generator, which then creates new images by continuing outward from the edges of the original piece.
- Score: 1.713291434132985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of the state-of-the-art in image synthesis inspired by real
artwork is either entirely generative, driven by filtered random noise, or
inspired by the transfer of style. This work explores the application of image
inpainting to continue
famous artworks and produce generative art with a Conditional GAN. During the
training stage of the process, the borders of images are cropped, leaving only
the centre. An inpainting GAN is then tasked with learning to reconstruct the
original image from the centre crop by way of minimising both adversarial and
absolute difference losses. Once the network is trained, images are then
resized rather than cropped and presented as input to the generator. Following
the learning process, the generator then creates new images by continuing from
the edges of the original piece. Three experiments are performed with datasets
of 4766 landscape paintings (impressionism and romanticism), 1167 Ukiyo-e works
from the Japanese Edo period, and 4968 abstract artworks. Results show that
geometry and texture (including canvas and paint) as well as scenery such as
sky, clouds, water, land (including hills and mountains), grass, and flowers
are reproduced by the generator when extending real artworks. In the Ukiyo-e
experiments, it was observed that features such as written text were generated
even in cases where the original image did not have any, due to the presence of
an unpainted border within the input image.
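As a concrete illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch of the training objective (adversarial plus absolute-difference loss against the full image) and the inference-time trick of resizing rather than cropping. The resolutions, the L1 weight, and the `make_canvas` helper are illustrative assumptions, not the paper's exact architecture or settings.

```python
# Minimal sketch of the conditional-GAN inpainting objective described
# above. Image/crop sizes, the L1 weight, and the canvas layout are
# assumptions for illustration, not the paper's exact configuration.
import torch
import torch.nn.functional as F

IMG = 256    # assumed full-image resolution
CROP = 128   # assumed centre-crop size
OFF = (IMG - CROP) // 2

def make_canvas(x_small):
    """Place a CROP x CROP image at the centre of a blank IMG canvas."""
    b, c, _, _ = x_small.shape
    canvas = torch.zeros(b, c, IMG, IMG, device=x_small.device)
    canvas[:, :, OFF:OFF + CROP, OFF:OFF + CROP] = x_small
    return canvas

def generator_loss(G, D, real, lambda_l1=100.0):
    """Training: crop the borders away, reconstruct the full image."""
    centre = real[:, :, OFF:OFF + CROP, OFF:OFF + CROP]
    fake = G(make_canvas(centre))
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    l1 = F.l1_loss(fake, real)  # absolute-difference loss
    return adv + lambda_l1 * l1

def continue_artwork(G, artwork):
    """Inference: resize the whole piece into the centre region instead
    of cropping, so the generator paints outward past its edges."""
    small = F.interpolate(artwork, size=(CROP, CROP), mode="bilinear",
                          align_corners=False)
    return G(make_canvas(small))
```

The key asymmetry is in the last function: at inference the entire artwork is squeezed into the region the generator saw as a centre crop during training, so everything synthesized in the border region reads as a continuation of the original piece.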
Related papers
- Inverse Painting: Reconstructing The Painting Process [24.57538165449989]
We formulate this as an autoregressive image generation problem, in which an initially blank "canvas" is iteratively updated.
The model learns from real artists by training on many painting videos.
arXiv Detail & Related papers (2024-09-30T17:56:52Z)
- DLP-GAN: learning to draw modern Chinese landscape photos with generative adversarial network [20.74857981451259]
Chinese landscape painting has a unique and artistic style, and its drawing technique is highly abstract in both the use of color and the realistic representation of objects.
Previous methods focus on transferring from modern photos to ancient ink paintings, but little attention has been paid to translating landscape paintings into modern photos.
arXiv Detail & Related papers (2024-03-06T04:46:03Z)
- Space Narrative: Generating Images and 3D Scenes of Chinese Garden from Text using Deep Learning [0.0]
We propose a deep learning method to generate garden paintings from text descriptions.
Our image-text pair dataset consists of more than one thousand Ming Dynasty garden paintings and their inscriptions and postscripts.
A latent text-to-image diffusion model learns the mapping from descriptive texts to garden paintings of the Ming Dynasty, and then the text description of Jichang Garden guides the model to generate new garden paintings.
arXiv Detail & Related papers (2023-11-01T07:16:01Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Text-Guided Synthesis of Eulerian Cinemagraphs [81.20353774053768]
We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions.
We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures.
arXiv Detail & Related papers (2023-07-06T17:59:31Z)
- Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models.
This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z)
- Paint it Black: Generating paintings from text descriptions [0.0]
Two distinct tasks, generating photorealistic pictures from given text prompts and transferring the style of a painting to a real image so it appears to have been made by an artist, have been addressed many times, and several approaches have been proposed to accomplish them.
In this paper, we explore two distinct strategies and integrate them.
The first strategy is to generate photorealistic images and then apply style transfer; the second is to train an image generation model on real images with captions and then fine-tune it on captioned paintings.
arXiv Detail & Related papers (2023-02-17T11:07:53Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Interactive Style Transfer: All is Your Palette [74.06681967115594]
We propose a drawing-like interactive style transfer (IST) method, by which users can interactively create a harmonious-style image.
Our IST method can serve as a brush: it dips style from anywhere and paints it onto any region of the target content image.
arXiv Detail & Related papers (2022-03-25T06:38:46Z)
- ReGO: Reference-Guided Outpainting for Scenery Image [82.21559299694555]
Generative adversarial learning has advanced image outpainting by producing semantically consistent content for a given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from its neighbors.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment the ReGO to synthesize style-consistent results.
arXiv Detail & Related papers (2021-06-20T02:34:55Z)
- Learning of Art Style Using AI and Its Evaluation Based on Psychological Experiments [1.0499611180329802]
GANs (generative adversarial networks) are a new AI technology that can perform deep learning with less training data.
We have carried out a comparison between several art sets with different art styles using a GAN.
arXiv Detail & Related papers (2020-05-04T07:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.