Generative Art Using Neural Visual Grammars and Dual Encoders
- URL: http://arxiv.org/abs/2105.00162v2
- Date: Tue, 4 May 2021 01:34:46 GMT
- Title: Generative Art Using Neural Visual Grammars and Dual Encoders
- Authors: Chrisantha Fernando, S. M. Ali Eslami, Jean-Baptiste Alayrac, Piotr
Mirowski, Dylan Banarse, Simon Osindero
- Abstract summary: A novel algorithm for producing generative art is described.
It allows a user to input a text string and, in a creative response to this string, outputs an image.
- Score: 25.100664361601112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whilst there are perhaps only a few scientific methods, there seem to be
almost as many artistic methods as there are artists. Artistic processes appear
to inhabit the highest order of open-endedness. To begin to understand some of
the processes of art making it is helpful to try to automate them even
partially. In this paper, a novel algorithm for producing generative art is
described which allows a user to input a text string, and which in a creative
response to this string, outputs an image which interprets that string. It does
so by evolving images using a hierarchical neural Lindenmayer system, and
evaluating these images along the way using an image text dual encoder trained
on billions of images and their associated text from the internet. In doing so
we have access to and control over an instance of an artistic process, allowing
analysis of which aspects of the artistic process become the task of the
algorithm, and which elements remain the responsibility of the artist.
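Below is a minimal sketch of the generate-render-score loop the abstract describes. It uses OpenAI's public CLIP model as a stand-in for the image-text dual encoder mentioned in the paper, and a toy turtle-graphics L-system in place of the hierarchical neural Lindenmayer system; the axiom, production rules, and mutation operator are illustrative placeholders, not the authors' implementation.

```python
# A minimal sketch, not the authors' implementation: a toy L-system is
# evolved so that its rendering scores highly against a text prompt under
# a CLIP dual encoder (stand-in for the encoder described in the paper).
import math
import random

import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def expand(axiom, rules, depth):
    """Rewrite an L-system string `depth` times using the production rules."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s


def render(s, size=224, step=6, angle=25.0):
    """Toy turtle-graphics interpreter (placeholder for the paper's
    hierarchical neural L-system renderer)."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    x, y, heading, stack = size / 2, size - 10.0, -90.0, []
    for ch in s:
        if ch in "AB":                                   # draw forward
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            draw.line((x, y, nx, ny), fill="black")
            x, y = nx, ny
        elif ch == "+":
            heading += angle
        elif ch == "-":
            heading -= angle
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]" and stack:
            x, y, heading = stack.pop()
    return img


def mutate(rules):
    """Placeholder mutation: append a random symbol to one production."""
    key = random.choice(list(rules))
    rules[key] += random.choice("AB+-[]")
    return rules


def fitness(image, text_features):
    """Cosine similarity between the image and text embeddings."""
    with torch.no_grad():
        feats = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return (feats @ text_features.T).item()


def evolve(prompt, generations=200):
    """Hill-climb over production rules, keeping the best-scoring image."""
    with torch.no_grad():
        text_features = model.encode_text(clip.tokenize([prompt]).to(device))
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    best_rules = {"A": "B[+A][-A]", "B": "BB"}
    best_image = render(expand("A", best_rules, depth=4))
    best_score = fitness(best_image, text_features)
    for _ in range(generations):
        candidate = mutate(dict(best_rules))
        image = render(expand("A", candidate, depth=4))
        score = fitness(image, text_features)
        if score > best_score:
            best_rules, best_image, best_score = candidate, image, score
    return best_image, best_score
```

In the paper the dual-encoder score drives an evolutionary search over a hierarchical neural L-system rather than the simple hill-climb over string rewrite rules shown here; the sketch only illustrates the overall loop of generating an image, scoring it against the text prompt, and keeping the better candidate.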
Related papers
- Learning to Synthesize Graphics Programs for Geometric Artworks [12.82009632507056]
We present an approach that treats a set of drawing tools as executable programs.
This method predicts a sequence of steps to achieve the final image.
Experiments demonstrate that our program synthesizer, Art2Prog, can comprehensively understand complex input images.
arXiv Detail & Related papers (2024-10-21T08:28:11Z) - ProcessPainter: Learn Painting Process from Sequence Data [27.9875429986135]
The painting process of artists is inherently stepwise and varies significantly among different painters and styles.
Traditional stroke-based rendering methods break down images into sequences of brushstrokes, yet they fall short of replicating the authentic processes of artists.
We introduce ProcessPainter, a text-to-video model that is initially pre-trained on synthetic data and subsequently fine-tuned with a select set of artists' painting sequences.
arXiv Detail & Related papers (2024-06-10T07:18:41Z) - Interactive Neural Painting [66.9376011879115]
This paper proposes the first approach for Interactive Neural Painting (NP).
We propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder.
Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art.
arXiv Detail & Related papers (2023-07-31T07:02:00Z) - Text-Guided Synthesis of Eulerian Cinemagraphs [81.20353774053768]
We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions.
We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures.
arXiv Detail & Related papers (2023-07-06T17:59:31Z) - Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models whose outputs span varying degrees of artness and form a training dataset.
This dataset is then employed to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z) - Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z) - Text2Human: Text-Driven Controllable Human Image Generation [98.34326708923284]
Existing generative models often fall short under the high diversity of clothing shapes and textures.
We present a text-driven controllable framework, Text2Human, for a high-quality and diverse human generation.
arXiv Detail & Related papers (2022-05-31T17:57:06Z) - Text to artistic image generation [0.0]
We create an end-to-end solution that can generate artistic images from text descriptions.
Due to the lack of datasets with paired text description and artistic images, it is hard to directly train an algorithm which can create art based on text input.
arXiv Detail & Related papers (2022-05-05T04:44:56Z) - Toward Modeling Creative Processes for Algorithmic Painting [12.602935529346063]
The paper argues that creative processes often involve two important components: vague, high-level goals and exploratory processes for discovering new ideas.
This paper sketches out possible computational mechanisms for imitating those elements of the painting process, including underspecified loss functions and iterative painting procedures with explicit task decompositions.
arXiv Detail & Related papers (2022-05-03T16:33:45Z) - State of the Art on Neural Rendering [141.22760314536438]
We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs.
This report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence.
arXiv Detail & Related papers (2020-04-08T04:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.