Exploring the Emotional Landscape of Music: An Analysis of Valence
Trends and Genre Variations in Spotify Music Data
- URL: http://arxiv.org/abs/2310.19052v1
- Date: Sun, 29 Oct 2023 15:57:31 GMT
- Title: Exploring the Emotional Landscape of Music: An Analysis of Valence
Trends and Genre Variations in Spotify Music Data
- Authors: Shruti Dutta, Shashwat Mookherjee
- Abstract summary: This paper conducts an intricate analysis of musical emotions and trends using Spotify music data.
Employing regression modeling, temporal analysis, mood transitions, and genre investigation, the study uncovers patterns within music-emotion relationships.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper conducts an intricate analysis of musical emotions and trends
using Spotify music data, encompassing audio features and valence scores
extracted through the Spotipy API. Employing regression modeling, temporal
analysis, mood transitions, and genre investigation, the study uncovers
patterns within music-emotion relationships. Regression models (linear, support
vector, random forest, and ridge) are employed to predict valence scores.
Temporal analysis reveals shifts in valence distribution over time, while mood
transition exploration illuminates emotional dynamics within playlists. The
research contributes to nuanced insights into music's emotional fabric,
enhancing comprehension of the interplay between music and emotions over the
years.
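As a rough illustration of the pipeline the abstract describes, the sketch below pulls per-track audio features with the Spotipy client and fits the four regression models named above to predict valence with scikit-learn. This is a minimal sketch, not the authors' code: the playlist ID, feature columns, and train/test split are illustrative assumptions, and Spotipy credentials are expected via the standard SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET environment variables.

```python
# Minimal sketch (assumed workflow, not the paper's implementation):
# fetch Spotify audio features via Spotipy, then predict valence with
# the four regressors mentioned in the abstract.
import pandas as pd
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Authenticate with client credentials read from the environment.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

# Collect track IDs from a playlist (placeholder ID, purely illustrative).
playlist = sp.playlist_items("PLAYLIST_ID")
track_ids = [item["track"]["id"] for item in playlist["items"] if item["track"]]

# Fetch audio features (up to 100 IDs per call); drop any missing entries.
features = pd.DataFrame(f for f in sp.audio_features(track_ids) if f)

# Predict valence from the other audio features (assumed feature set).
feature_cols = ["danceability", "energy", "loudness", "speechiness",
                "acousticness", "instrumentalness", "liveness", "tempo"]
X, y = features[feature_cols], features["valence"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "linear": LinearRegression(),
    "support vector": SVR(),
    "random forest": RandomForestRegressor(random_state=42),
    "ridge": Ridge(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, r2_score(y_test, model.predict(X_test)))
```

The temporal and mood-transition analyses mentioned in the abstract would presumably operate on the same feature table, for example by grouping valence by release year or by consecutive playlist positions; the paper does not specify the exact procedure.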
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z) - Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z) - Are We There Yet? A Brief Survey of Music Emotion Prediction Datasets, Models and Outstanding Challenges [9.62904012066486]
We provide a comprehensive overview of the available music-emotion datasets and discuss evaluation standards as well as competitions in the field.
We highlight the challenges that persist in accurately capturing emotion in music, including issues related to dataset quality, annotation consistency, and model generalization.
We have complemented our findings with an accompanying GitHub repository.
arXiv Detail & Related papers (2024-06-13T05:00:27Z) - MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - Exploring and Applying Audio-Based Sentiment Analysis in Music [0.0]
The ability of a computational model to interpret musical emotions is largely unexplored.
This study seeks to (1) predict the emotion of a musical clip over time and (2) determine the next emotion value after the music in a time series to ensure seamless transitions.
arXiv Detail & Related papers (2024-02-22T22:34:06Z) - Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z) - REMAST: Real-time Emotion-based Music Arrangement with Soft Transition [29.34094293561448]
Music as an emotional intervention medium has important applications in scenarios such as music therapy, games, and movies.
We propose REMAST to achieve emotion real-time fit and smooth transition simultaneously.
According to the evaluation results, REMAST surpasses the state-of-the-art methods in objective and subjective metrics.
arXiv Detail & Related papers (2023-05-14T00:09:48Z) - Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z) - Modelling Moral Traits with Music Listening Preferences and Demographics [2.3204178451683264]
We explore the association between music genre preferences, demographics, and moral values using self-reported data from an online survey administered in Canada.
Our results show the importance of music in predicting a person's moral values (0.55-0.69 AUROC), while adding basic demographic features such as age and gender further increases performance.
arXiv Detail & Related papers (2021-07-01T10:26:29Z) - Musical Prosody-Driven Emotion Classification: Interpreting Vocalists
Portrayal of Emotions Through Machine Learning [0.0]
The role of musical prosody remains under-explored despite several studies demonstrating a strong connection between prosody and emotion.
In this study, we restrict the input of traditional machine learning algorithms to the features of musical prosody.
We utilize a methodology for individual data collection from vocalists, and personal ground truth labeling by the artist themselves.
arXiv Detail & Related papers (2021-06-04T15:40:19Z) - Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with
Visual Computing for Improved Music Video Analysis [91.3755431537592]
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification, and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)