Psychologically-Inspired Music Recommendation System
- URL: http://arxiv.org/abs/2205.03459v1
- Date: Fri, 6 May 2022 19:38:26 GMT
- Title: Psychologically-Inspired Music Recommendation System
- Authors: Danila Rozhevskii, Jie Zhu, Boyuan Zhao
- Abstract summary: We seek to relate the personality and the current emotional state of the listener to the audio features in order to build an emotion-aware MRS.
We compare the results both quantitatively and qualitatively to the output of the traditional MRS based on the Spotify API data to understand if our advancements make a significant impact on the quality of music recommendations.
- Score: 3.032299122358857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last few years, automated recommendation systems have been a major
focus in the music field, where companies such as Spotify, Amazon, and Apple
are competing in the ability to generate the most personalized music
suggestions for their users. One challenge developers have yet to address is
accounting for the psychological and emotional aspects of music. Our goal is
to integrate users' personal traits and their
current emotional state into a single music recommendation system with both
collaborative and content-based filtering. We seek to relate the personality
and the current emotional state of the listener to the audio features in order
to build an emotion-aware MRS. We compare the results both quantitatively and
qualitatively to the output of the traditional MRS based on the Spotify API
data to understand if our advancements make a significant impact on the quality
of music recommendations.
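The hybrid design the abstract describes, combining collaborative and content-based filtering with the listener's emotional state, can be sketched as a weighted blend of the two scores. This is a minimal illustrative sketch, not the paper's actual model; the function names, the `alpha` weight, and the idea of representing mood as a vector in the same space as audio features are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_score(cf_score, track_features, mood_features, alpha=0.6):
    """Blend a collaborative-filtering score with an emotion-aware
    content score; alpha weights the collaborative component."""
    content_score = cosine(track_features, mood_features)
    return alpha * cf_score + (1 - alpha) * content_score

# A track whose audio features (e.g. valence, energy) align with the
# listener's current mood vector gets a boosted overall score.
score = hybrid_score(0.8, [0.9, 0.2], [0.8, 0.3])
```

Tuning `alpha` trades off personalization from listening history against alignment with the listener's current emotional state.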
Related papers
- Personalized Music Recommendation with a Heterogeneity-aware Deep Bayesian Network [8.844728473984766]
We propose a Heterogeneity-aware Deep Bayesian Network (HDBN) to model these assumptions.
The HDBN mimics a user's decision process to choose music with four components: personalized prior user emotion distribution modeling, posterior user emotion distribution modeling, user grouping, and Bayesian neural network-based music mood preference prediction.
arXiv Detail & Related papers (2024-06-20T08:12:11Z) - MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - MusicRL: Aligning Music Generation to Human Preferences [62.44903326718772]
MusicRL is the first music generation system finetuned from human feedback.
We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences.
We train MusicRL-U, the first text-to-music model that incorporates human feedback at scale.
arXiv Detail & Related papers (2024-02-06T18:36:52Z) - Emotion-Aware Music Recommendation System: Enhancing User Experience Through Real-Time Emotional Context [1.3812010983144802]
This study addresses the deficiency in conventional music recommendation systems by focusing on the vital role of emotions in shaping users' music choices.
It introduces an AI model that incorporates emotional context into the song recommendation process.
By accurately detecting users' real-time emotions, the model can generate personalized song recommendations that align with the user's emotional state.
arXiv Detail & Related papers (2023-11-17T05:55:36Z) - Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
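The individual-fairness condition described in this entry, similar songs should have similar representations, is often formalized as a Lipschitz constraint: the distance between two items' learned embeddings should be bounded by their distance in the ground-truth audio-feature space. A minimal sketch follows; the function names and the `lipschitz` constant are illustrative assumptions, not details from the paper.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def is_individually_fair(emb_a, emb_b, audio_a, audio_b, lipschitz=1.0):
    """Check the individual-fairness condition: the distance between two
    songs' embeddings must not exceed `lipschitz` times their distance
    in the ground-truth audio-feature space."""
    return euclidean(emb_a, emb_b) <= lipschitz * euclidean(audio_a, audio_b)
```

Under this check, two songs with identical audio features must receive identical embeddings, regardless of how popular either one is.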
arXiv Detail & Related papers (2023-08-28T14:12:25Z) - Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z) - Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z) - Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning [0.0]
The role of musical prosody remains under-explored despite several studies demonstrating a strong connection between prosody and emotion.
In this study, we restrict the input of traditional machine learning algorithms to the features of musical prosody.
We utilize a methodology for individual data collection from vocalists, and personal ground truth labeling by the artist themselves.
arXiv Detail & Related papers (2021-06-04T15:40:19Z) - Time-Aware Music Recommender Systems: Modeling the Evolution of Implicit User Preferences and User Listening Habits in A Collaborative Filtering Approach [4.576379639081977]
This paper studies the temporal information regarding when songs are played.
The purpose is to model both the evolution of user preferences in the form of evolving implicit ratings and user listening behavior.
In the collaborative filtering method proposed in this work, daily listening habits are captured in order to characterize users and provide them with more reliable recommendations.
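Evolving implicit ratings of the kind this entry describes are commonly built by decaying each play's contribution with its age, so that recent listening habits dominate. A minimal sketch, assuming timestamps measured in days and a hypothetical half-life parameter (neither is specified by the paper):

```python
import math

def implicit_rating(play_timestamps, now, half_life_days=30.0):
    """Aggregate a user's plays of a song into an evolving implicit
    rating: each play is exponentially down-weighted by its age in
    days, with the given half-life."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * (now - t)) for t in play_timestamps)

# A play from 30 days ago contributes half as much as one from today.
rating = implicit_rating([0.0, 30.0], now=30.0)
```

Shortening the half-life makes the rating track short-term listening habits more closely; lengthening it emphasizes stable long-term preferences.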
arXiv Detail & Related papers (2020-08-26T08:00:11Z) - Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with Visual Computing for Improved Music Video Analysis [91.3755431537592]
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification, and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences of its use.