Link Me Baby One More Time: Social Music Discovery on Spotify
- URL: http://arxiv.org/abs/2401.08818v2
- Date: Tue, 7 May 2024 09:02:21 GMT
- Title: Link Me Baby One More Time: Social Music Discovery on Spotify
- Authors: Shazia'Ayn Babul, Desislava Hristova, Antonio Lima, Renaud Lambiotte, Mariano Beguerisse-Díaz
- Abstract summary: We use data from Spotify to investigate how a link sent from one user to another results in the receiver engaging with the music of the shared artist.
We consider several factors that may influence this process, such as the strength of the sender-receiver relationship, the user's role in the Spotify social network, their music social cohesion, and how similar the new artist is to the receiver's taste.
- Score: 0.3495246564946556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the social and contextual factors that influence the outcome of person-to-person music recommendations and discovery. Specifically, we use data from Spotify to investigate how a link sent from one user to another results in the receiver engaging with the music of the shared artist. We consider several factors that may influence this process, such as the strength of the sender-receiver relationship, the user's role in the Spotify social network, their music social cohesion, and how similar the new artist is to the receiver's taste. We find that the receiver of a link is more likely to engage with a new artist when (1) they have similar music taste to the sender and the shared track is a good fit for their taste, (2) they have a stronger and more intimate tie with the sender, and (3) the shared artist is popular amongst the receiver's connections. Finally, we use these findings to build a Random Forest classifier to predict whether a shared music track will result in the receiver's engagement with the shared artist. This model elucidates which types of social and contextual features are most predictive, although peak performance is achieved when a diverse set of features is included. These findings provide new insights into the multifaceted mechanisms underpinning the interplay between music discovery and social processes.
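As a concrete illustration of the classification setup described in the abstract, here is a minimal sketch in Python. The feature names (taste similarity, track fit, tie strength, local artist popularity) paraphrase the factors listed above, but the data, feature encodings, and hyperparameters are hypothetical stand-ins, not the paper's actual pipeline.

```python
# Minimal sketch of the engagement-prediction setup: a Random Forest
# over social/contextual features. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features for each shared link (names are illustrative):
X = np.column_stack([
    rng.random(n),  # taste_similarity: sender-receiver music-taste overlap
    rng.random(n),  # track_fit: how well the shared track matches the receiver
    rng.random(n),  # tie_strength: intensity/intimacy of the sender-receiver tie
    rng.random(n),  # local_popularity: artist popularity among the receiver's connections
])
# Synthetic label: 1 if the receiver engaged with the shared artist.
y = (X.sum(axis=1) + rng.normal(0, 0.5, n) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
# Feature importances indicate which feature families drive predictions,
# mirroring the paper's feature-type analysis.
print("importances:", clf.feature_importances_)
```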
Related papers
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- A Dataset and Baselines for Measuring and Predicting the Music Piece Memorability [16.18336216092687]
We focus on measuring and predicting music memorability.
We train baselines to predict and analyze music memorability.
We demonstrate that while there is room for improvement, predicting music memorability with limited data is possible.
arXiv Detail & Related papers (2024-05-21T14:57:04Z)
- MusicRL: Aligning Music Generation to Human Preferences [62.44903326718772]
MusicRL is the first music generation system finetuned from human feedback.
We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences.
We train MusicRL-U, the first text-to-music model that incorporates human feedback at scale.
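The summary does not say how the pairwise preferences become a training signal. A common recipe (assumed here, not confirmed as MusicRL's actual method) is to fit a Bradley-Terry-style reward model that scores the preferred clip above the rejected one, then finetune the generator against that reward:

```python
# Assumed Bradley-Terry-style reward model over pairwise preferences;
# a generic recipe, not MusicRL's actual objective or architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy scorer over fixed-size audio-clip embeddings (dimensions are made up).
reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(preferred, rejected):
    """-log sigmoid(r_preferred - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()

# One toy batch standing in for (preferred, rejected) clip pairs.
pos, neg = torch.randn(32, 128), torch.randn(32, 128)
loss = preference_loss(pos, neg)
loss.backward()  # the fitted reward can then drive RL finetuning of the generator
```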
arXiv Detail & Related papers (2024-02-06T18:36:52Z)
- "All of Me": Mining Users' Attributes from their Public Spotify Playlists [18.77632404384041]
People create and publicly share their own playlists to express their musical tastes.
These publicly accessible playlists serve as sources of rich insights into users' attributes and identities.
We focus on identifying recurring musical characteristics associated with users' individual attributes, such as demographics, habits, or personality traits.
arXiv Detail & Related papers (2024-01-25T16:38:06Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
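The premise above, that acoustically similar songs should receive similar representations, can be written as a Lipschitz-style penalty. The sketch below is a generic formulation of that idea, not the paper's exact regularizer:

```python
# Generic individual-fairness penalty: embedding distances should not
# exceed audio-similarity distances. Illustrative only.
import torch

def fairness_penalty(embeddings, audio_dist):
    """Hinge on pairs whose embedding distance exceeds their audio distance.

    embeddings: (n, d) learned song representations
    audio_dist: (n, n) pairwise audio dissimilarities, scaled to [0, 1]
    """
    emb_dist = torch.cdist(embeddings, embeddings, p=2)
    return torch.clamp(emb_dist - audio_dist, min=0).mean()

emb = torch.randn(16, 32, requires_grad=True)
audio = torch.rand(16, 16)
fairness_penalty(emb, audio).backward()  # add to the recommender's main loss
```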
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- Psychologically-Inspired Music Recommendation System [3.032299122358857]
We seek to relate the personality and the current emotional state of the listener to the audio features in order to build an emotion-aware MRS.
We compare the results both quantitatively and qualitatively to the output of the traditional MRS based on the Spotify API data to understand if our advancements make a significant impact on the quality of music recommendations.
arXiv Detail & Related papers (2022-05-06T19:38:26Z)
- Contrastive Learning with Positive-Negative Frame Mask for Music Representation [91.44187939465948]
This paper proposes a novel Positive-nEgative frame mask for Music Representation based on the contrastive learning framework, abbreviated as PEMR.
We devise a novel contrastive learning objective to accommodate both self-augmented positives/negatives sampled from the same music.
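To make the idea concrete, here is a generic InfoNCE-style contrastive objective over frame-masked views of the same clip; the masking scheme and loss are illustrative assumptions, not PEMR's exact formulation:

```python
# Generic contrastive objective over frame-masked views of the same clip.
# Illustrates the masked positives/negatives idea; not PEMR's exact loss.
import torch
import torch.nn.functional as F

def mask_frames(clips, keep_prob=0.8):
    """Randomly zero out time frames of a (batch, frames, feat) tensor."""
    keep = (torch.rand(clips.shape[:2]) < keep_prob).float().unsqueeze(-1)
    return clips * keep

def info_nce(anchors, positives, temperature=0.1):
    """Matching rows are positives; all other rows serve as negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(len(a))        # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

clips = torch.randn(8, 100, 64)                  # toy batch of music features
view_a = mask_frames(clips).mean(dim=1)          # pooled masked view 1
view_b = mask_frames(clips).mean(dim=1)          # pooled masked view 2
loss = info_nce(view_a, view_b)
```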
arXiv Detail & Related papers (2022-03-17T07:11:42Z)
- Content-based Music Similarity with Triplet Networks [21.220806977978853]
We explore the feasibility of using triplet neural networks to embed songs based on content-based music similarity.
Our network is trained using triplets of songs such that two songs by the same artist are embedded closer to one another than to a third song by a different artist.
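The training objective described above maps directly onto a standard triplet margin loss; the embedding network and input features below are placeholders:

```python
# Sketch of the same-artist triplet objective: anchor and positive share
# an artist, the negative does not. Network and features are placeholders.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
criterion = nn.TripletMarginLoss(margin=0.2)

anchor = torch.randn(16, 128)    # songs by artist A
positive = torch.randn(16, 128)  # other songs by artist A
negative = torch.randn(16, 128)  # songs by different artists

loss = criterion(embed(anchor), embed(positive), embed(negative))
loss.backward()
```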
arXiv Detail & Related papers (2020-08-11T18:10:02Z)
- Music Gesture for Visual Sound Separation [121.36275456396075]
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.