"More Than Words": Linking Music Preferences and Moral Values Through
Lyrics
- URL: http://arxiv.org/abs/2209.01169v1
- Date: Fri, 2 Sep 2022 16:58:52 GMT
- Title: "More Than Words": Linking Music Preferences and Moral Values Through
Lyrics
- Authors: Vjosa Preniqi, Kyriaki Kalimeri, Charalampos Saitis
- Abstract summary: This study explores the association between music preferences and moral values by applying text analysis techniques to lyrics.
We align psychometric scores of 1,386 users with lyrics from the top 5 songs of their preferred music artists, as identified from Facebook Page Likes.
A machine learning framework applies regression models to evaluate the predictive power of lyrical features for inferring moral values.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study explores the association between music preferences and moral
values by applying text analysis techniques to lyrics. Harvesting data from a
Facebook-hosted application, we align psychometric scores of 1,386 users with
lyrics from the top 5 songs of their preferred music artists, as identified from
Facebook Page Likes. We extract a set of lyrical features related to each
song's overarching narrative, moral valence, sentiment, and emotion. A machine
learning framework was designed to apply regression approaches and evaluate
the predictive power of lyrical features for inferring moral values. Results
suggest that lyrics from the top songs of artists people like are informative of their
morality. Virtues of hierarchy and tradition achieve higher prediction scores
($.20 \leq r \leq .30$) than values of empathy and equality ($.08 \leq r \leq .11$),
while basic demographic variables account for only a small part of the
models' explanatory power. This highlights that music listening behaviour,
as assessed via lyrical preferences, is on its own informative of moral
values. We discuss the technological and musicological implications and
possible future improvements.
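The evaluation setup described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration with synthetic data, not the authors' actual pipeline: per-user lyrical features (e.g. sentiment, moral valence scores) are regressed against psychometric moral-value scores, and predictive power is reported as the Pearson correlation r between predicted and observed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_features = 1386, 8  # 1,386 users as in the study; 8 illustrative lyrical features

# Synthetic stand-ins: feature matrix (e.g. sentiment, moral valence, emotion
# scores per user) and a noisy moral-value score with a true linear signal.
X = rng.normal(size=(n_users, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=3.0, size=n_users)

# Ridge regression via its closed-form solution (one possible "regression approach")
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
y_pred = X @ w

# Pearson r between predicted and observed scores, the metric behind the
# reported ranges (.20 <= r <= .30 vs. .08 <= r <= .11)
r = np.corrcoef(y, y_pred)[0, 1]
print(f"Pearson r = {r:.2f}")
```

In the paper's setting the features and targets come from lyric analysis and psychometric questionnaires rather than synthetic draws; the sketch only shows the shape of the regress-then-correlate evaluation.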
Related papers
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - Exploring and Applying Audio-Based Sentiment Analysis in Music [0.0]
The ability of a computational model to interpret musical emotions is largely unexplored.
This study seeks to (1) predict the emotion of a musical clip over time and (2) predict the emotion value that follows the clip in a time series, to enable seamless transitions.
arXiv Detail & Related papers (2024-02-22T22:34:06Z) - Are Words Enough? On the semantic conditioning of affective music generation [1.534667887016089]
This scoping review aims to analyze and discuss the possibilities of music generation conditioned by emotions.
In detail, we review two main paradigms adopted in automatic music generation: rules-based and machine-learning models.
We conclude that overcoming the limitations and ambiguity of language in expressing emotions through music has the potential to impact the creative industries.
arXiv Detail & Related papers (2023-11-07T00:19:09Z) - Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
arXiv Detail & Related papers (2023-05-30T17:20:25Z) - Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z) - Modelling Emotion Dynamics in Song Lyrics with State Space Models [4.18804572788063]
We propose a method to predict emotion dynamics in song lyrics without song-level supervision.
Our experiments show that applying our method consistently improves the performance of sentence-level baselines without requiring any annotated songs.
arXiv Detail & Related papers (2022-10-17T21:07:23Z) - Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z) - The Contribution of Lyrics and Acoustics to Collaborative Understanding of Mood [7.426508199697412]
We study the association between song lyrics and mood through a data-driven analysis.
Our data set consists of nearly one million songs, with song-mood associations derived from user playlists on the Spotify streaming platform.
We take advantage of state-of-the-art natural language processing models based on transformers to learn the association between the lyrics and moods.
arXiv Detail & Related papers (2022-05-31T19:58:41Z) - Modelling Moral Traits with Music Listening Preferences and Demographics [2.3204178451683264]
We explore the association between music genre preferences, demographics and moral values by analysing self-reported data from an online survey administered in Canada.
Our results show the importance of music in predicting a person's moral values (.55-.69 AUROC), while adding basic demographic features such as age and gender further increases performance.
arXiv Detail & Related papers (2021-07-01T10:26:29Z) - SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint [54.012194728496155]
SongMASS is proposed to overcome the challenges of lyric-to-melody generation and melody-to-lyric generation.
It leverages masked sequence to sequence (MASS) pre-training and attention based alignment modeling.
We show that SongMASS generates lyric and melody with significantly better quality than the baseline method.
arXiv Detail & Related papers (2020-12-09T16:56:59Z) - Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.