AffectMachine-Classical: A novel system for generating affective classical music
- URL: http://arxiv.org/abs/2304.04915v1
- Date: Tue, 11 Apr 2023 01:06:26 GMT
- Title: AffectMachine-Classical: A novel system for generating affective classical music
- Authors: Kat R. Agres, Adyasha Dash, Phoebe Chua
- Abstract summary: AffectMachine-Classical is capable of generating affective classical music in real-time.
A listener study was conducted to validate the ability of the system to reliably convey target emotions to listeners.
Future work will embed AffectMachine-Classical into biofeedback systems, to leverage the efficacy of the affective music for emotional well-being in listeners.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work introduces a new music generation system, called
AffectMachine-Classical, which is capable of generating affective classical music
in real-time. AffectMachine was designed to be incorporated into biofeedback
systems (such as brain-computer-interfaces) to help users become aware of, and
ultimately mediate, their own dynamic affective states. That is, this system
was developed for music-based MedTech to support real-time emotion
self-regulation in users. We provide an overview of the rule-based,
probabilistic system architecture, describing the main aspects of the system
and how they are novel. We then present the results of a listener study that
was conducted to validate the ability of the system to reliably convey target
emotions to listeners. The findings indicate that AffectMachine-Classical is
very effective in communicating various levels of Arousal ($R^2 = .96$) to
listeners, and is also quite convincing in terms of Valence ($R^2 = .90$). Future
work will embed AffectMachine-Classical into biofeedback systems, to leverage
the efficacy of the affective music for emotional well-being in listeners.
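As an illustration of how a rule-based, probabilistic generator can map a target arousal/valence state to musical parameters in real time, here is a minimal sketch. Every mapping rule, range, and name below is an assumption for illustration only, not AffectMachine-Classical's published rule set.

```python
import random

def music_params(arousal: float, valence: float) -> dict:
    """Map a target (arousal, valence) pair in [0, 1]^2 to musical
    parameters, the way a rule-based affective generator might.
    All mappings are illustrative assumptions, not the published
    AffectMachine-Classical rules."""
    return {
        # Higher arousal -> faster tempo and louder dynamics.
        "tempo_bpm": 60 + 90 * arousal,
        "velocity": int(50 + 60 * arousal),
        # Higher valence -> major mode; lower valence -> minor mode.
        "mode": "major" if valence >= 0.5 else "minor",
        # A probabilistic choice keeps the output non-repetitive:
        # denser rhythms become more likely as arousal rises.
        "note_density": random.choices(
            ["sparse", "medium", "dense"],
            weights=[1.5 - arousal, 1.0, 0.5 + arousal],
        )[0],
    }

# A biofeedback loop would call this every bar with the user's
# current estimated affective state.
print(music_params(arousal=0.8, valence=0.3))  # tense, agitated
print(music_params(arousal=0.2, valence=0.9))  # calm, content
```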
Related papers
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground-truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
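The individual-fairness premise above (similar-sounding songs should receive similar representations) is commonly formalized as a Lipschitz-style constraint between two metric spaces. Below is a minimal sketch of such a penalty over learned song embeddings; the distance functions, the Lipschitz constant, and all names are illustrative assumptions, not the paper's actual GNN objective.

```python
import numpy as np

def individual_fairness_penalty(audio_feats, embeddings, lipschitz=1.0):
    """Penalize pairs of songs that are close in audio-feature space
    but far apart in embedding space. A rough stand-in for an
    individual-fairness regularizer on a recommender's embeddings."""
    n = len(audio_feats)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_audio = np.linalg.norm(audio_feats[i] - audio_feats[j])
            d_embed = np.linalg.norm(embeddings[i] - embeddings[j])
            # Lipschitz-style condition: d_embed <= L * d_audio.
            penalty += max(0.0, d_embed - lipschitz * d_audio)
    return penalty / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))  # hypothetical audio features
emb = rng.normal(size=(8, 32))    # hypothetical learned embeddings
print(individual_fairness_penalty(feats, emb))
```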
- Bio-inspired spike-based Hippocampus and Posterior Parietal Cortex models for robot navigation and environment pseudo-mapping [52.77024349608834]
This work proposes a spike-based robotic navigation and environment pseudo-mapping system.
The hippocampus is in charge of maintaining a representation of an environment state map, and the PPC is in charge of local decision-making.
This is the first implementation of an environment pseudo-mapping system with dynamic learning based on a bio-inspired hippocampal memory.
arXiv Detail & Related papers (2023-05-22T10:20:34Z)
- Towards personalised music-therapy; a neurocomputational modelling perspective [7.642617497821302]
Music therapy has emerged as a successful intervention that improves patients' outcomes across a large range of neurological and mood disorders without adverse effects.
Brain networks are entrained to music in ways that can be explained both via top-down and bottom-up processes.
arXiv Detail & Related papers (2023-05-15T19:42:04Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- Psychologically-Inspired Music Recommendation System [3.032299122358857]
We seek to relate the personality and the current emotional state of the listener to audio features in order to build an emotion-aware music recommendation system (MRS).
We compare the results both quantitatively and qualitatively to the output of the traditional MRS based on the Spotify API data to understand if our advancements make a significant impact on the quality of music recommendations.
arXiv Detail & Related papers (2022-05-06T19:38:26Z)
- Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning [0.0]
The role of musical prosody remains under-explored despite several studies demonstrating a strong connection between prosody and emotion.
In this study, we restrict the input of traditional machine learning algorithms to the features of musical prosody.
We utilize a methodology for individual data collection from vocalists, and personal ground-truth labeling by the artists themselves (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-04T15:40:19Z)
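As an illustration of restricting a classifier's input to prosodic features only, the sketch below extracts a handful of prosody descriptors (tempo, dynamics, pitch statistics) and trains a standard classifier on those alone. The librosa feature choices, file names, and labels are hypothetical; the summary does not specify the paper's actual feature set or algorithms.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def prosody_features(path: str) -> np.ndarray:
    """Extract a small, prosody-only feature vector: tempo plus
    loudness and pitch statistics. An illustrative feature set,
    not the one used in the paper."""
    y, sr = librosa.load(path, sr=22050)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    rms = librosa.feature.rms(y=y)[0]              # dynamics (loudness)
    f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr)  # pitch contour
    return np.array([float(np.atleast_1d(tempo)[0]),
                     rms.mean(), rms.std(),
                     np.nanmean(f0), np.nanstd(f0)])

# Hypothetical per-vocalist recordings, labeled by the artists
# themselves as in the study's ground-truth methodology.
paths, labels = ["take1.wav", "take2.wav"], ["joy", "sadness"]
X = np.stack([prosody_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```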
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to automatically evaluate the performance of the proposed architecture (see the sketch after this entry).
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
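As a sketch of the setup this entry compares, the model below predicts the next note token from an embedded sequence and lets the memory mechanism (LSTM vs. GRU) be swapped out. The vocabulary size, dimensions, and layer choices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NoteModel(nn.Module):
    """Next-note prediction over a token vocabulary (e.g., MIDI
    pitches); the memory cell is a constructor argument so that
    LSTM and GRU variants can be compared head-to-head."""
    def __init__(self, cell="LSTM", vocab=128, dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)      # note embeddings
        rnn_cls = {"LSTM": nn.LSTM, "GRU": nn.GRU}[cell]
        self.rnn = rnn_cls(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)       # logits per step

    def forward(self, tokens):                     # (batch, time)
        out, _ = self.rnn(self.embed(tokens))
        return self.head(out)                      # (batch, time, vocab)

for cell in ("LSTM", "GRU"):
    model = NoteModel(cell)
    logits = model(torch.randint(0, 128, (2, 16)))
    print(cell, logits.shape)  # torch.Size([2, 16, 128])
```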
- A Human-Computer Duet System for Music Performance [7.777761975348974]
We create a virtual violinist who can collaborate with a human pianist to perform chamber music automatically without any intervention.
The system incorporates the techniques from various fields, including real-time music tracking, pose estimation, and body movement generation.
The proposed system has been validated in public concerts.
arXiv Detail & Related papers (2020-09-16T17:19:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.