Between Predictability and Randomness: Seeking Artistic Inspiration from AI Generative Models
- URL: http://arxiv.org/abs/2506.12634v1
- Date: Sat, 14 Jun 2025 21:34:26 GMT
- Title: Between Predictability and Randomness: Seeking Artistic Inspiration from AI Generative Models
- Authors: Olga Vechtomova
- Abstract summary: This paper explores the use of AI-generated poetic lines as stimuli for creativity. I demonstrate that LSTM-VAE lines achieve their evocative impact through a combination of resonant imagery and productive indeterminacy.
- Score: 6.744385328015561
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artistic inspiration often emerges from language that is open to interpretation. This paper explores the use of AI-generated poetic lines as stimuli for creativity. Through analysis of two generative AI approaches--lines generated by Long Short-Term Memory Variational Autoencoders (LSTM-VAE) and complete poems by Large Language Models (LLMs)--I demonstrate that LSTM-VAE lines achieve their evocative impact through a combination of resonant imagery and productive indeterminacy. While LLMs produce technically accomplished poetry with conventional patterns, LSTM-VAE lines can engage the artist through semantic openness, unconventional combinations, and fragments that resist closure. Through the composition of an original poem, where narrative emerged organically through engagement with LSTM-VAE generated lines rather than following a predetermined structure, I demonstrate how these characteristics can serve as evocative starting points for authentic artistic expression.
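The abstract's core mechanism, sampling latent vectors and decoding them into fragmentary poetic lines, can be sketched as follows. This is a minimal toy illustration only: the vocabulary, the random linear "decoder", and the latent drift are hypothetical stand-ins, not the paper's trained LSTM-VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and randomly initialised decoder weights; the paper uses a
# trained LSTM decoder that maps latent vectors to poetic lines.
VOCAB = ["the", "rain", "glass", "remembers", "a", "door", "light", "<eos>"]
LATENT_DIM = 8
W = rng.normal(size=(LATENT_DIM, len(VOCAB)))

def sample_line(temperature=1.0, max_len=6):
    """Sample a latent vector from N(0, I) and decode it token by token.

    Higher temperature flattens the token distribution, trading coherence
    for the kind of unconventional combinations the paper discusses.
    """
    z = rng.normal(size=LATENT_DIM)  # draw from the VAE prior
    tokens = []
    for _ in range(max_len):
        logits = z @ W / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tok = rng.choice(VOCAB, p=probs)
        if tok == "<eos>":
            break
        tokens.append(str(tok))
        z = 0.9 * z + 0.1 * rng.normal(size=LATENT_DIM)  # drift the latent state
    return " ".join(tokens)

line = sample_line(temperature=1.2)
print(line)
```

Because decoding stops at `<eos>` or after a few tokens, the output is a short fragment rather than a complete sentence, loosely mirroring the "fragments that resist closure" the abstract attributes to LSTM-VAE lines.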
Related papers
- PoemTale Diffusion: Minimising Information Loss in Poem to Image Generation with Multi-Stage Prompt Refinement [18.293592213622183]
PoemTale Diffusion aims to minimise the information that is lost during poetic text-to-image conversion. To support this, we adapt existing state-of-the-art diffusion models by modifying their self-attention mechanisms. To encourage research in the field of poetry, we introduce the P4I dataset, consisting of 1111 poems.
arXiv Detail & Related papers (2025-07-18T07:33:08Z)
- OmniDRCA: Parallel Speech-Text Foundation Model via Dual-Resolution Speech Representations and Contrastive Alignment [48.17593420058064]
We present OmniDRCA, a parallel speech-text foundation model based on joint autoregressive modeling. Our approach processes speech and text representations in parallel while enhancing audio comprehension through contrastive alignment.
arXiv Detail & Related papers (2025-06-11T02:57:22Z)
- RePrompt: Reasoning-Augmented Reprompting for Text-to-Image Generation via Reinforcement Learning [88.14234949860105]
RePrompt is a novel reprompting framework that introduces explicit reasoning into the prompt enhancement process via reinforcement learning. Our approach enables end-to-end training without human-annotated data.
arXiv Detail & Related papers (2025-05-23T06:44:26Z)
- Compose Your Aesthetics: Empowering Text-to-Image Models with the Principles of Art [61.28133495240179]
We propose a novel task of aesthetics alignment which seeks to align user-specified aesthetics with the T2I generation output. Inspired by how artworks provide an invaluable perspective to approach aesthetics, we codify visual aesthetics using the compositional framework artists employ. We demonstrate that T2I DMs can effectively offer 10 compositional controls through user-specified PoA conditions.
arXiv Detail & Related papers (2025-03-15T06:58:09Z)
- Mimetic Poet [6.999740786886536]
This paper presents the design and initial assessment of a novel device that uses generative AI to facilitate creative ideation.
The device allows participants to compose short poetic texts by physically placing words on the device's surface.
Upon composing the text, the system employs a large language model (LLM) to generate a response, displayed on an e-ink screen.
arXiv Detail & Related papers (2024-06-04T02:50:15Z)
- Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization [52.935150075484074]
We introduce a well-designed visual tokenizer to translate the non-linguistic image into a sequence of discrete tokens like a foreign language.
The resulting visual tokens encompass high-level semantics worthy of a word and also support dynamic sequence length varying from the image.
This unification empowers LaVIT to serve as an impressive generalist interface to understand and generate multi-modal content simultaneously.
arXiv Detail & Related papers (2023-09-09T03:01:38Z)
- Neural Authorship Attribution: Stylometric Analysis on Large Language Models [16.63955074133222]
Large language models (LLMs) such as GPT-4, PaLM, and Llama have significantly propelled the generation of AI-crafted text.
With rising concerns about their potential misuse, there is a pressing need for AI-generated-text forensics.
arXiv Detail & Related papers (2023-08-14T17:46:52Z)
- PoetryDiffusion: Towards Joint Semantic and Metrical Manipulation in Poetry Generation [58.36105306993046]
Controllable text generation is a challenging and meaningful field in natural language generation (NLG).
In this paper, we pioneer the use of the Diffusion model for generating sonnets and Chinese SongCi poetry.
Our model outperforms existing models in automatic evaluation of semantic, metrical, and overall performance as well as human evaluation.
arXiv Detail & Related papers (2023-06-14T11:57:31Z)
- BACON: Deep-Learning Powered AI for Poetry Generation with Author Linguistic Style Transfer [91.3755431537592]
This paper describes BACON, a prototype of an automatic poetry generator with author linguistic style transfer.
It combines concepts and techniques from finite state machinery, probabilistic models, artificial neural networks, and deep learning to write original poetry with rich aesthetic qualities in the style of any given author.
arXiv Detail & Related papers (2021-12-14T00:08:36Z)
- Prose2Poem: The Blessing of Transformers in Translating Prose to Persian Poetry [2.15242029196761]
We introduce a novel Neural Machine Translation (NMT) approach to translate prose to ancient Persian poetry.
We trained a Transformer model from scratch to obtain initial translations and pretrained different variations of BERT to obtain final translations.
arXiv Detail & Related papers (2021-09-30T09:04:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.