Generating Music and Generative Art from Brain activity
- URL: http://arxiv.org/abs/2108.04316v2
- Date: Thu, 12 Aug 2021 05:14:22 GMT
- Title: Generating Music and Generative Art from Brain activity
- Authors: Ricardo Andres Diaz Rincon
- Abstract summary: This research work introduces a computational system for creating generative art using a Brain-Computer Interface (BCI).
The generated artwork uses brain signals and concepts of geometry, color and spatial location to give complexity to the autonomous construction.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Nowadays, technological advances have influenced all human activities,
creating new dynamics and ways of communication. In this context, some artists
have incorporated these advances in their creative process, giving rise to
unique aesthetic expressions referred to in the literature as Generative Art,
which is characterized by assigning part of the creative process to a system
that acts with certain autonomy (Galanter, 2003).
This research work introduces a computational system for creating generative
art using a Brain-Computer Interface (BCI) which portrays the user's brain
activity in a digital artwork. In this way, the user takes an active role in
the creative process. To show that the proposed system renders the user's
mental states in an artistic piece through visual and sound representation,
several tests were carried out to verify the reliability of the data sent by
the BCI device.
The generated artwork uses brain signals and concepts of geometry, color and
spatial location to give complexity to the autonomous construction. As an added
value, the visual and auditory production is accompanied by an olfactory and
kinesthetic component which complements the art pieces providing a multimodal
communication character.
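The mapping described above, from brain signals to geometry and color, can be sketched in miniature. The following is a hypothetical illustration, not the paper's actual pipeline: it extracts alpha and beta band power from a (here simulated) EEG trace via an FFT and maps their relative strengths to two invented visual parameters, `hue` and `complexity`.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def brain_to_visual(signal, fs=256):
    """Map relative alpha/beta power to hypothetical visual parameters."""
    alpha = band_power(signal, fs, 8, 12)   # band associated with relaxation
    beta = band_power(signal, fs, 13, 30)   # band associated with attention
    total = alpha + beta
    return {
        "hue": alpha / total,           # calmer state -> cooler colors
        "complexity": beta / total,     # more focus -> denser geometry
    }

# Simulated one-second EEG trace dominated by a 10 Hz (alpha) rhythm.
fs = 256
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
params = brain_to_visual(signal, fs)
```

Because the simulated trace is alpha-dominated, `params["hue"]` comes out larger than `params["complexity"]`; a real system would read the signal from the BCI headset instead and feed these parameters to the rendering and sound engines.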
Related papers
- Bridging Paintings and Music -- Exploring Emotion based Music Generation through Paintings [10.302353984541497]
This research develops a model capable of generating music that resonates with the emotions depicted in visual arts.
Addressing the scarcity of aligned art and music data, we curated the Emotion Painting Music dataset.
Our dual-stage framework converts images to text descriptions of emotional content and then transforms these descriptions into music, facilitating efficient learning with minimal data.
arXiv Detail & Related papers (2024-09-12T08:19:25Z)
- Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z)
- Equivalence: An analysis of artists' roles with Image Generative AI from Conceptual Art perspective through an interactive installation design practice [16.063735487844628]
This study explores how artists interact with advanced text-to-image Generative AI models.
To exemplify this framework, a case study titled "Equivalence" converts users' speech input into continuously evolving paintings.
This work aims to broaden our understanding of artists' roles and foster a deeper appreciation for the creative aspects inherent in artwork created with Image Generative AI.
arXiv Detail & Related papers (2024-04-29T02:45:23Z)
- CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion [74.44273919041912]
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build CreativeSynth, a unified framework based on a diffusion model that can coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, machines must understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- ViNTER: Image Narrative Generation with Emotion-Arc-Aware Transformer [59.05857591535986]
We propose a model called ViNTER to generate image narratives that focus on time series representing varying emotions as "emotion arcs".
We present experimental results of both manual and automatic evaluations.
arXiv Detail & Related papers (2022-02-15T10:53:08Z)
- AI-based artistic representation of emotions from EEG signals: a discussion on fairness, inclusion, and aesthetics [2.6928226868848864]
We present an AI-based Brain-Computer Interface (BCI) in which humans and machines interact to express feelings artistically.
We seek to understand the dynamics of this interaction to reach better co-existence in fairness, inclusion, and aesthetics.
arXiv Detail & Related papers (2022-02-07T14:51:02Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Empathic AI Painter: A Computational Creativity System with Embodied Conversational Interaction [3.5450828190071655]
This paper documents our attempt to computationally model the creative process of a portrait painter.
Our system includes an empathic conversational interaction component to capture the user's dominant personality category.
A generative AI portraiture system then uses this categorization to create a personalized stylization of the user's portrait.
arXiv Detail & Related papers (2020-05-28T18:35:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.