Art Creation with Multi-Conditional StyleGANs
- URL: http://arxiv.org/abs/2202.11777v1
- Date: Wed, 23 Feb 2022 20:45:41 GMT
- Title: Art Creation with Multi-Conditional StyleGANs
- Authors: Konstantin Dobler, Florian Hübscher, Jan Westphal, Alejandro
Sierra-Múnera, Gerard de Melo, Ralf Krestel
- Abstract summary: A human artist needs a combination of unique skills, understanding, and genuine intention to create artworks that evoke deep feelings and emotions.
We introduce a multi-conditional Generative Adversarial Network (GAN) approach trained on large amounts of human paintings to synthesize realistic-looking paintings that emulate human art.
- Score: 81.72047414190482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating meaningful art is often viewed as a uniquely human endeavor. A human
artist needs a combination of unique skills, understanding, and genuine
intention to create artworks that evoke deep feelings and emotions. In this
paper, we introduce a multi-conditional Generative Adversarial Network (GAN)
approach trained on large amounts of human paintings to synthesize
realistic-looking paintings that emulate human art. Our approach is based on
the StyleGAN neural network architecture, but incorporates a custom
multi-conditional control mechanism that provides fine-granular control over
characteristics of the generated paintings, e.g., with regard to the perceived
emotion evoked in a spectator. For better control, we introduce the conditional
truncation trick, which adapts the standard truncation trick for the
conditional setting and diverse datasets. Finally, we develop a diverse set of
evaluation techniques tailored to multi-conditional generation.
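The conditional truncation trick mentioned above can be sketched as follows. The standard truncation trick pulls an intermediate latent w toward a single global mean latent with a factor psi; the conditional variant instead pulls w toward the mean latent of its own condition, which suits diverse multi-modal datasets better. This is a minimal illustrative sketch, not the paper's implementation: the mapping network is a toy placeholder, and all names, shapes, and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CONDITIONS, LATENT_DIM = 4, 8

def mapping_network(z, c):
    # Toy stand-in for StyleGAN's mapping network (z, condition) -> w.
    # The real network is a learned MLP; this placeholder only makes the
    # latent distribution depend on the condition index c.
    return np.tanh(z + 0.5 * c)

def estimate_condition_means(num_samples=1000):
    # Estimate a separate mean latent w for each condition by sampling,
    # instead of the single global mean of the standard truncation trick.
    means = np.zeros((NUM_CONDITIONS, LATENT_DIM))
    for c in range(NUM_CONDITIONS):
        ws = [mapping_network(rng.standard_normal(LATENT_DIM), c)
              for _ in range(num_samples)]
        means[c] = np.mean(ws, axis=0)
    return means

def conditional_truncation(w, condition_means, c, psi=0.7):
    # Interpolate w toward the per-condition mean latent: psi=1 leaves w
    # unchanged (no truncation), psi=0 collapses to the condition's mean.
    w_mean = condition_means[c]
    return w_mean + psi * (w - w_mean)
```

With psi between 0 and 1 this trades sample diversity for fidelity per condition, which is the point of adapting the trick to the conditional setting.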
Related papers
- Neural-Polyptych: Content Controllable Painting Recreation for Diverse Genres [30.83874057768352]
We present a unified framework, Neural-Polyptych, to facilitate the creation of expansive, high-resolution paintings.
We have designed a multi-scale GAN-based architecture to decompose the generation process into two parts.
We validate our approach to diverse genres of both Eastern and Western paintings.
arXiv Detail & Related papers (2024-09-29T12:46:00Z)
- AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed "ArtConstellation".
A key finding is that AI-generated artworks are visually related to the principal concepts of modern-period art made in 1800-2000.
arXiv Detail & Related papers (2024-02-04T11:49:51Z)
- CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion [74.44273919041912]
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build the unified framework CreativeSynth, which is based on a diffusion model with the ability to coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z)
- XAGen: 3D Expressive Human Avatars Generation [76.69560679209171]
XAGen is the first 3D generative model for human avatars capable of expressive control over body, face, and hands.
We propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands.
Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities.
arXiv Detail & Related papers (2023-11-22T18:30:42Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of creative systems into a hierarchy of 5 classes, showing the pathway of creativity evolving from a human-mimicking artist to a machine artist in its own right.
In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- LiveStyle -- An Application to Transfer Artistic Styles [0.0]
Style transfer using neural networks refers to optimization techniques in which a content image and a style image are blended.
This paper implements style transfer using three different neural networks in the form of an application that is accessible to the general population.
arXiv Detail & Related papers (2021-05-03T13:50:48Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning on pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Anisotropic Stroke Control for Multiple Artists Style Transfer [36.92721585146738]
A Stroke Control Multi-Artist Style Transfer framework is developed.
The Anisotropic Stroke Module (ASM) endows the network with adaptive semantic consistency across various styles.
In contrast to a single-scale conditional discriminator, our discriminator is able to capture multi-scale texture cues.
arXiv Detail & Related papers (2020-10-16T05:32:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.