TraitSpaces: Towards Interpretable Visual Creativity for Human-AI Co-Creation
- URL: http://arxiv.org/abs/2509.24326v1
- Date: Mon, 29 Sep 2025 06:24:18 GMT
- Title: TraitSpaces: Towards Interpretable Visual Creativity for Human-AI Co-Creation
- Authors: Prerna Luthra
- Abstract summary: Drawing on interviews with practicing artists and theories from psychology, we define 12 traits that capture affective, symbolic, cultural, and ethical dimensions of creativity. Traits such as Environmental Dialogicity and Redemptive Arc are predicted with high reliability. By linking cultural-aesthetic insights with computational modeling, our work aims not to reduce creativity to numbers, but to offer shared language and interpretable tools for artists, researchers, and AI systems to collaborate meaningfully.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a psychologically grounded and artist-informed framework for modeling visual creativity across four domains: Inner, Outer, Imaginative, and Moral Worlds. Drawing on interviews with practicing artists and theories from psychology, we define 12 traits that capture affective, symbolic, cultural, and ethical dimensions of creativity. Using 20k artworks from the SemArt dataset, we annotate images with GPT-4.1 using detailed, theory-aligned prompts, and evaluate the learnability of these traits from CLIP image embeddings. Traits such as Environmental Dialogicity and Redemptive Arc are predicted with high reliability ($R^2 \approx 0.64 - 0.68$), while others like Memory Imprint remain challenging, highlighting the limits of purely visual encoding. Beyond technical metrics, we visualize a "creativity trait-space" and illustrate how it can support interpretable, trait-aware co-creation - e.g., sliding along a Redemptive Arc axis to explore works of adversity and renewal. By linking cultural-aesthetic insights with computational modeling, our work aims not to reduce creativity to numbers, but to offer shared language and interpretable tools for artists, researchers, and AI systems to collaborate meaningfully.
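The trait-learnability setup described in the abstract (regressing per-image trait scores from CLIP embeddings and reporting held-out $R^2$) can be sketched with a simple ridge regressor. The sketch below uses synthetic features and labels as stand-ins for the real CLIP embeddings and GPT-annotated SemArt trait scores; it illustrates the evaluation pipeline, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for CLIP image embeddings (real CLIP features
# are typically 512- or 768-dimensional).
n_images, dim = 1000, 64
X = rng.normal(size=(n_images, dim))

# Simulated trait scores (e.g. a "Redemptive Arc" rating) as a noisy
# linear function of the embedding -- a stand-in for GPT-annotated labels.
w_true = rng.normal(size=dim)
y = X @ w_true + rng.normal(scale=2.0, size=n_images)

# Hold out the last 20% of images for evaluation.
X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

# Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(dim),
                    X_train.T @ y_train)

# Coefficient of determination R^2 on held-out images.
pred = X_test @ w
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"held-out R^2 = {r2:.2f}")
```

A linear probe like this is a common way to test whether a trait is "learnable" from frozen embeddings: a high $R^2$ suggests the trait is visually encoded, while a low one (as the paper reports for Memory Imprint) suggests the trait depends on information a purely visual encoder does not capture.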
Related papers
- Bridging Cognitive Gap: Hierarchical Description Learning for Artistic Image Aesthetics Assessment [51.40989269202702]
The aesthetic quality assessment task is crucial for developing a human-aligned quantitative evaluation system for AIGC. We propose ArtQuant, an aesthetics assessment framework for artistic images which couples isolated aesthetic dimensions through description generation. Our approach achieves state-of-the-art performance on several datasets while requiring only 33% of conventional training epochs.
arXiv Detail & Related papers (2025-12-29T12:18:26Z) - Simple Lines, Big Ideas: Towards Interpretable Assessment of Human Creativity from Drawings [18.09092203643732]
We propose a data-driven framework for automatic and interpretable creativity assessment from drawings. Motivated by the cognitive evidence proposed in [6] that creativity can emerge from both what is drawn (content) and how it is drawn (style), we reinterpret the creativity score as a function of these two complementary dimensions.
arXiv Detail & Related papers (2025-11-17T02:16:01Z) - Does CLIP perceive art the same way we do? [0.0]
We investigate CLIP's ability to extract high-level semantic and stylistic information from paintings. Our findings reveal both strengths and limitations in CLIP's visual representations. Our work highlights the need for deeper interpretability in multimodal systems.
arXiv Detail & Related papers (2025-05-08T13:21:10Z) - Compose Your Aesthetics: Empowering Text-to-Image Models with the Principles of Art [61.28133495240179]
We propose a novel task of aesthetics alignment which seeks to align user-specified aesthetics with the T2I generation output. Inspired by how artworks provide an invaluable perspective to approach aesthetics, we codify visual aesthetics using the compositional framework artists employ. We demonstrate that T2I DMs can effectively offer 10 compositional controls through user-specified PoA conditions.
arXiv Detail & Related papers (2025-03-15T06:58:09Z) - Pencils to Pixels: A Systematic Study of Creative Drawings across Children, Adults and AI [0.0]
We analyze 1338 drawings by children, adults and AI on a creative drawing task.<n>For style, we define measures of ink density, ink distribution and number of elements.<n>For content, we use expert-annotated categories to study conceptual diversity.<n>We build simple models to predict expert and automated creativity scores.
arXiv Detail & Related papers (2025-02-09T19:02:32Z) - Alien Recombination: Exploring Concept Blends Beyond Human Cognitive Availability in Visual Art [90.8684263806649]
We show how AI can transcend human cognitive limitations in visual art creation.
Our research hypothesizes that visual art contains a vast unexplored space of conceptual combinations.
We present the Alien Recombination method to identify and generate concept combinations that lie beyond human cognitive availability.
arXiv Detail & Related papers (2024-11-18T11:55:38Z) - Unlocking Comics: The AI4VA Dataset for Visual Understanding [62.345344799258804]
This paper presents a novel dataset comprising Franco-Belgian comics from the 1950s annotated for tasks including depth estimation, semantic segmentation, saliency detection, and character identification.
It consists of two distinct and consistent styles and incorporates object concepts and labels taken from natural images.
By including such diverse information across styles, this dataset not only holds promise for computational creativity but also offers avenues for the digitization of art and storytelling innovation.
arXiv Detail & Related papers (2024-10-27T14:27:05Z) - Computational Modeling of Artistic Inspiration: A Framework for Predicting Aesthetic Preferences in Lyrical Lines Using Linguistic and Stylistic Features [8.205321096201095]
Artistic inspiration plays a crucial role in producing works that resonate deeply with audiences.
This work proposes a novel framework for computationally modeling artistic preferences in different individuals.
Our framework outperforms an out-of-the-box LLaMA-3-70b, a state-of-the-art open-source language model, by nearly 18 points.
arXiv Detail & Related papers (2024-10-03T18:10:16Z) - Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z) - Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z) - A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which impacts several cognitive functions for humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z) - Graph Neural Networks for Knowledge Enhanced Visual Representation of Paintings [12.724750260261066]
ArtSAGENet is a novel architecture that integrates Graph Neural Networks (GNNs) and Convolutional Neural Networks (CNNs). We show that our proposed ArtSAGENet captures and encodes valuable dependencies between the artists and the artworks. Our findings underline a great potential of integrating visual content and semantics for fine art analysis and curation.
arXiv Detail & Related papers (2021-05-17T23:05:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.