Psychologically-Inspired Music Recommendation System
- URL: http://arxiv.org/abs/2205.03459v1
- Date: Fri, 6 May 2022 19:38:26 GMT
- Title: Psychologically-Inspired Music Recommendation System
- Authors: Danila Rozhevskii, Jie Zhu, Boyuan Zhao
- Abstract summary: We seek to relate the personality and the current emotional state of the listener to the audio features in order to build an emotion-aware MRS.
We compare the results both quantitatively and qualitatively to the output of the traditional MRS based on the Spotify API data to understand if our advancements make a significant impact on the quality of music recommendations.
- Score: 3.032299122358857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last few years, automated recommendation systems have been a major
focus in the music field, where companies such as Spotify, Amazon, and Apple
are competing in the ability to generate the most personalized music
suggestions for their users. One challenge developers have yet to tackle is taking into account the psychological and emotional aspects of music. Our goal is to find a way to integrate users' personal traits and their
current emotional state into a single music recommendation system with both
collaborative and content-based filtering. We seek to relate the personality
and the current emotional state of the listener to the audio features in order
to build an emotion-aware MRS. We compare the results both quantitatively and
qualitatively to the output of the traditional MRS based on the Spotify API
data to understand if our advancements make a significant impact on the quality
of music recommendations.
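The hybrid approach the abstract describes — blending a listener's long-term taste with their current emotional state, both expressed over audio features — can be sketched as a simple blended ranking. The feature vectors, the `alpha` weight, and the cosine scoring below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Hypothetical audio-feature vectors (Spotify-style valence, energy,
# danceability, acousticness), one row per track.
TRACKS = {
    "track_a": np.array([0.9, 0.8, 0.7, 0.1]),  # upbeat
    "track_b": np.array([0.2, 0.3, 0.2, 0.8]),  # calm
    "track_c": np.array([0.8, 0.9, 0.8, 0.2]),  # upbeat
}

def recommend(user_profile, mood, alpha=0.5):
    """Rank tracks against a blend of long-term taste and current mood.

    user_profile and mood are stand-ins for the paper's personality and
    emotion models, projected into the same audio-feature space; alpha
    weights long-term taste against the current emotional state.
    """
    target = alpha * user_profile + (1 - alpha) * mood
    scores = {
        name: float(feats @ target
                    / (np.linalg.norm(feats) * np.linalg.norm(target)))
        for name, feats in TRACKS.items()
    }
    # Highest cosine similarity to the blended target comes first.
    return sorted(scores, key=scores.get, reverse=True)

# A listener whose history skews upbeat but who currently feels calm:
ranking = recommend(np.array([0.8, 0.8, 0.7, 0.2]),
                    np.array([0.1, 0.2, 0.1, 0.9]),
                    alpha=0.3)
print(ranking)  # with alpha=0.3 the calm track ranks first
```

Lowering `alpha` lets the current emotional state dominate the ranking, which is the kind of trade-off an emotion-aware MRS has to tune.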
Related papers
- SoundSignature: What Type of Music Do You Like? [0.0]
SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs.
The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic/musical features with the assistant's extensive knowledge of the artists and bands.
arXiv Detail & Related papers (2024-10-04T12:40:45Z)
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Personalized Music Recommendation with a Heterogeneity-aware Deep Bayesian Network [8.844728473984766]
We propose a Heterogeneity-aware Deep Bayesian Network (HDBN) to model these assumptions.
The HDBN mimics a user's decision process to choose music with four components: personalized prior user emotion distribution modeling, posterior user emotion distribution modeling, user grouping, and Bayesian neural network-based music mood preference prediction.
arXiv Detail & Related papers (2024-06-20T08:12:11Z)
- MusicRL: Aligning Music Generation to Human Preferences [62.44903326718772]
MusicRL is the first music generation system finetuned from human feedback.
We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences.
We train MusicRL-U, the first text-to-music model that incorporates human feedback at scale.
arXiv Detail & Related papers (2024-02-06T18:36:52Z)
- Emotion-Aware Music Recommendation System: Enhancing User Experience Through Real-Time Emotional Context [1.3812010983144802]
This study addresses the deficiency in conventional music recommendation systems by focusing on the vital role of emotions in shaping users' music choices.
It introduces an AI model that incorporates emotional context into the song recommendation process.
By accurately detecting users' real-time emotions, the model can generate personalized song recommendations that align with the user's emotional state.
arXiv Detail & Related papers (2023-11-17T05:55:36Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
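The individual-fairness principle just stated — similar-sounding songs should sit close together in representation space — can be sketched as a Lipschitz-style penalty on embedding distances. The toy feature vectors, embeddings, and `lipschitz` constant below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def fairness_penalty(audio_feats, embeddings, lipschitz=1.0):
    """Penalize song pairs whose embedding distance exceeds
    lipschitz * their audio-feature distance, i.e. pairs that sound
    alike but were pushed apart in representation space.
    """
    n = len(audio_feats)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_audio = np.linalg.norm(audio_feats[i] - audio_feats[j])
            d_embed = np.linalg.norm(embeddings[i] - embeddings[j])
            penalty += max(0.0, d_embed - lipschitz * d_audio)
    return penalty

# Songs 0 and 1 sound identical but were embedded far apart, so the
# pair incurs a penalty; the dissimilar third song does not.
feats = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]])
embeds = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(fairness_penalty(feats, embeds, lipschitz=2.0))  # ≈ 1.414
```

In a GNN recommender this term would be added to the training loss, nudging representations (and hence recommendations) toward the "ground truth listening experience" the summary describes rather than toward popularity signals.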
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Time-Aware Music Recommender Systems: Modeling the Evolution of Implicit User Preferences and User Listening Habits in A Collaborative Filtering Approach [4.576379639081977]
This paper studies the temporal information regarding when songs are played.
The purpose is to model both the evolution of user preferences in the form of evolving implicit ratings and user listening behavior.
In the collaborative filtering method proposed in this work, daily listening habits are captured in order to characterize users and provide them with more reliable recommendations.
arXiv Detail & Related papers (2020-08-26T08:00:11Z)
- Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with Visual Computing for Improved Music Video Analysis [91.3755431537592]
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification, and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.