AIwriting: Relations Between Image Generation and Digital Writing
- URL: http://arxiv.org/abs/2305.10834v1
- Date: Thu, 18 May 2023 09:23:05 GMT
- Title: AIwriting: Relations Between Image Generation and Digital Writing
- Authors: Scott Rettberg, Talan Memmott, Jill Walker Rettberg, Jason Nelson and
Patrick Lichty
- Abstract summary: During 2022, AI text generation systems such as GPT-3 and AI text-to-image generation systems such as DALL-E 2 made exponential leaps forward.
In this panel a group of electronic literature authors and theorists consider new opportunities for human creativity presented by these systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During 2022, both transformer-based AI text generation systems such as GPT-3
and AI text-to-image generation systems such as DALL-E 2 and Stable Diffusion
made exponential leaps forward and are unquestionably altering the fields of
digital art and electronic literature. In this panel a group of electronic
literature authors and theorists consider new opportunities for human
creativity presented by these systems and present new works they have produced
during the past year that specifically address these systems as environments
for literary expressions that are translated through iterative interlocutive
processes into visual representations. The premise that binds these
presentations is that these systems and the works generated must be considered
from a literary perspective, as they originate in human writing. In works
ranging from a visual memoir of the personal experience of a health crisis, to
interactive web comics, to architectures based on abstract poetic language, to
political satire, four artists explore the capabilities of these writing
environments for new genres of literary artistic practice, while a digital
culture theorist considers the origins and effects of the particular training
datasets of human language and images on which these new hybrid forms are
based.
Related papers
- A Perspective on Literary Metaphor in the Context of Generative AI [0.6445605125467572]
This study explores the role of literary metaphor and its capacity to generate a range of meanings.
To investigate whether the inclusion of original figurative language improves textual quality, we trained an LSTM-based language model in Afrikaans.
The paper raises thought-provoking questions on aesthetic value, interpretation and evaluation.
arXiv Detail & Related papers (2024-09-02T08:27:29Z) - Illustrating Classic Brazilian Books using a Text-To-Image Diffusion Model [0.4374837991804086]
Latent Diffusion Models (LDMs) signify a paradigm shift in the domain of AI capabilities.
This article delves into the feasibility of employing the Stable Diffusion LDM to illustrate literary works.
arXiv Detail & Related papers (2024-08-01T13:28:15Z) - State of the Art on Diffusion Models for Visual Computing [191.6168813012954]
This report introduces the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model.
We also give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing.
We discuss available datasets, metrics, open challenges, and social implications.
arXiv Detail & Related papers (2023-10-11T05:32:29Z) - SciMON: Scientific Inspiration Machines Optimized for Novelty [68.46036589035539]
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models use background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
arXiv Detail & Related papers (2023-05-23T17:12:08Z) - Language Does More Than Describe: On The Lack Of Figurative Speech in
Text-To-Image Models [63.545146807810305]
Text-to-image diffusion models can generate high-quality pictures from textual input prompts.
These models have been trained using text data collected from content-based labelling protocols.
We characterise the sentimentality, objectiveness and degree of abstraction of publicly available text data used to train current text-to-image diffusion models.
arXiv Detail & Related papers (2022-10-19T14:20:05Z) - Visualize Before You Write: Imagination-Guided Open-Ended Text
Generation [68.96699389728964]
We propose iNLG that uses machine-generated images to guide language models in open-ended text generation.
Experiments and analyses demonstrate the effectiveness of iNLG on open-ended text generation tasks.
arXiv Detail & Related papers (2022-10-07T18:01:09Z) - Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, it is necessary for machines to understand humans' mental states, including desires, appreciation, and emotions; humans also need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z) - A Taxonomy of Prompt Modifiers for Text-To-Image Generation [6.903929927172919]
This paper identifies six types of prompt modifiers used by practitioners in the online community, based on a 3-month ethnographic study.
The novel taxonomy of prompt modifiers provides researchers with a conceptual starting point for investigating the practice of text-to-image generation.
We discuss research opportunities of this novel creative practice in the field of Human-Computer Interaction.
arXiv Detail & Related papers (2022-04-20T06:15:50Z) - ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z) - Multiversal views on language models [0.0]
We present a framework in which generative language models are conceptualized as multiverse generators.
This framework also applies to human imagination and is core to how we read and write fiction.
We call for exploration into this commonality through new forms of interfaces that allow humans to couple their imagination to AI in order to write.
arXiv Detail & Related papers (2021-02-12T08:28:28Z) - A Framework and Dataset for Abstract Art Generation via CalligraphyGAN [0.0]
We present a creative framework based on Conditional Generative Adversarial Networks and Contextual Neural Language Model to generate abstract artworks.
Our work is inspired by Chinese calligraphy, which is a unique form of visual art where the character itself is an aesthetic painting.
arXiv Detail & Related papers (2020-12-02T16:24:20Z)