AI-Generated Imagery: A New Era for the `Readymade'
- URL: http://arxiv.org/abs/2307.06033v1
- Date: Wed, 12 Jul 2023 09:25:56 GMT
- Title: AI-Generated Imagery: A New Era for the `Readymade'
- Authors: Amy Smith and Michael Cook
- Abstract summary: This paper aims to examine how digital images produced by generative AI systems have come to be so regularly referred to as art.
We employ existing philosophical frameworks and theories of language to suggest that some AI-generated imagery can be presented as `readymades' for consideration as art.
- Score: 0.7386189738262202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the term `art' defies any concrete definition, this paper aims to
examine how digital images produced by generative AI systems, such as
Midjourney, have come to be so regularly referred to as such. The discourse
around the classification of AI-generated imagery as art is currently somewhat
homogeneous, lacking the more nuanced aspects that would apply to more
traditional modes of artistic media production. This paper aims to bring
important philosophical considerations to the surface of the discussion around
AI-generated imagery in the context of art. We employ existing philosophical
frameworks and theories of language to suggest that some AI-generated imagery,
by virtue of its visual properties within these frameworks, can be presented as
`readymades' for consideration as art.
Related papers
- Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z)
- Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion [51.931083971448885]
We propose a framework named Human Feedback Inversion (HFI), where human feedback on model-generated images is condensed into textual tokens guiding the mitigation or removal of problematic images.
Our experimental results demonstrate our framework significantly reduces objectionable content generation while preserving image quality, contributing to the ethical deployment of AI in the public sphere.
arXiv Detail & Related papers (2024-07-17T05:21:41Z)
- Equivalence: An analysis of artists' roles with Image Generative AI from Conceptual Art perspective through an interactive installation design practice [16.063735487844628]
This study explores how artists interact with advanced text-to-image Generative AI models.
To exemplify this framework, a case study titled "Equivalence" converts users' speech input into continuously evolving paintings.
This work aims to broaden our understanding of artists' roles and foster a deeper appreciation for the creative aspects inherent in artwork created with Image Generative AI.
arXiv Detail & Related papers (2024-04-29T02:45:23Z)
- AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art [36.21731898719347]
We conduct a comprehensive analysis to position AI-generated art within the context of human art heritage.
Our comparative analysis is based on an extensive dataset, dubbed ``ArtConstellation''.
A key finding is that AI-generated artworks are visually related to the principal concepts of modern period art made in 1800-2000.
arXiv Detail & Related papers (2024-02-04T11:49:51Z)
- State of the Art on Diffusion Models for Visual Computing [191.6168813012954]
This report introduces the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model.
We also give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing.
We discuss available datasets, metrics, open challenges, and social implications.
arXiv Detail & Related papers (2023-10-11T05:32:29Z)
- Artificial Intelligence and Aesthetic Judgment [29.71278672770529]
Generative AIs produce creative outputs in the style of human expression.
We argue that encounters with the outputs of modern generative AI models are mediated by the same kinds of aesthetic judgments we apply to human-made art.
arXiv Detail & Related papers (2023-08-21T17:40:54Z)
- Learning to Evaluate the Artness of AI-generated Images [64.48229009396186]
ArtScore is a metric designed to evaluate the degree to which an image resembles authentic artworks by artists.
We employ pre-trained models for photo and artwork generation, resulting in a series of mixed models.
Images generated with these mixed models are then used to train a neural network that learns to estimate quantized artness levels of arbitrary images.
arXiv Detail & Related papers (2023-05-08T17:58:27Z)
- Language Does More Than Describe: On The Lack Of Figurative Speech in Text-To-Image Models [63.545146807810305]
Text-to-image diffusion models can generate high-quality pictures from textual input prompts.
These models have been trained using text data collected from content-based labelling protocols.
We characterise the sentimentality, objectiveness and degree of abstraction of publicly available text data used to train current text-to-image diffusion models.
arXiv Detail & Related papers (2022-10-19T14:20:05Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions, while humans also need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- A Taxonomy of Prompt Modifiers for Text-To-Image Generation [6.903929927172919]
This paper identifies six types of prompt modifiers used by practitioners in the online community, based on a three-month ethnographic study.
The novel taxonomy of prompt modifiers gives researchers a conceptual starting point for investigating the practice of text-to-image generation.
We discuss research opportunities of this novel creative practice in the field of Human-Computer Interaction.
arXiv Detail & Related papers (2022-04-20T06:15:50Z)
- Covering the News with (AI) Style [2.3043762032257895]
We introduce a multi-modal discriminative and generative framework capable of assisting humans in producing visual content related to a given theme.
Motivated by a request from The New York Times (NYT) seeking help to use AI to create art for their special section on Artificial Intelligence, we demonstrated the application of our system in producing such images.
arXiv Detail & Related papers (2020-01-05T22:57:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.