Modeling Musical Genre Trajectories through Pathlet Learning
- URL: http://arxiv.org/abs/2505.03480v1
- Date: Tue, 06 May 2025 12:33:40 GMT
- Title: Modeling Musical Genre Trajectories through Pathlet Learning
- Authors: Lilian Marey, Charlotte Laclau, Bruno Sguerra, Tiphaine Viard, Manuel Moussallam
- Abstract summary: This paper uses the dictionary learning paradigm to model user trajectories across different musical genres. We define a new framework that captures recurring patterns in genre trajectories, called pathlets. We show that pathlet learning reveals relevant listening patterns that can be analyzed both qualitatively and quantitatively.
- Score: 3.6133082266958616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing availability of user data on music streaming platforms opens up new possibilities for analyzing music consumption. However, understanding the evolution of user preferences remains a complex challenge, particularly as their musical tastes change over time. This paper uses the dictionary learning paradigm to model user trajectories across different musical genres. We define a new framework that captures recurring patterns in genre trajectories, called pathlets, enabling the creation of comprehensible trajectory embeddings. We show that pathlet learning reveals relevant listening patterns that can be analyzed both qualitatively and quantitatively. This work improves our understanding of users' interactions with music and opens up avenues of research into user behavior and into fostering diversity in recommender systems. A dataset of 2000 user histories tagged by genre over 17 months, supplied by Deezer (a leading music streaming company), is also released with the code.
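The abstract's core idea, decomposing genre trajectories into recurring building blocks ("pathlets") via dictionary learning, can be illustrated with a minimal sketch. This is not the authors' implementation; the window length, dictionary size, and toy data below are all assumptions, and scikit-learn's generic `DictionaryLearning` stands in for whatever formulation the paper actually uses.

```python
# Hedged sketch: dictionary learning over genre-trajectory windows.
# Each user trajectory is a sequence of per-week genre-listening
# proportions; fixed-length windows are stacked and factorized into
# sparse codes over a learned dictionary of recurring shapes.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_users, n_weeks, n_genres = 50, 17, 8  # toy dimensions, not the paper's
window = 4                              # pathlet length (assumption)

# Toy data: per-user, per-week genre listening proportions (rows sum to 1).
trajectories = rng.dirichlet(np.ones(n_genres), size=(n_users, n_weeks))

# Slice every trajectory into overlapping windows and flatten each window.
segments = np.array([
    traj[t:t + window].ravel()
    for traj in trajectories
    for t in range(n_weeks - window + 1)
])

# Learn a small dictionary of recurring segment shapes with sparse
# coefficients, so each window is explained by a few dictionary atoms.
model = DictionaryLearning(n_components=10, alpha=0.5, random_state=0)
codes = model.fit_transform(segments)   # sparse per-window embeddings

print(codes.shape)              # (700, 10): one code per window
print(model.components_.shape)  # (10, 32): 10 atoms of length window*genres
```

The sparse codes act as interpretable trajectory embeddings: each user window is described by the few learned atoms it activates, which is the kind of comprehensible representation the abstract describes.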
Related papers
- Familiarizing with Music: Discovery Patterns for Different Music Discovery Needs [9.363492538580681]
We analyze data from a survey answered by users of the major music streaming platform Deezer in combination with their streaming data. We first address questions regarding whether users who declare a higher interest in unfamiliar music listen to more diverse music. We then investigate which type of music tracks users choose to listen to when they explore unfamiliar music, identifying clear patterns of popularity and genre representativeness.
arXiv Detail & Related papers (2025-05-06T14:26:00Z) - Deconstructing Jazz Piano Style Using Machine Learning [0.9933900714070033]
We focus on musical style, which benefits from a rich theoretical and mathematical analysis tradition. We train a variety of supervised-learning models to identify 20 iconic jazz musicians across a dataset of 84 hours of recordings. Our models include a novel multi-input architecture that enables four musical domains (melody, harmony, rhythm, and dynamics) to be analysed separately.
arXiv Detail & Related papers (2025-04-07T12:37:39Z) - Enhancing Sequential Music Recommendation with Personalized Popularity Awareness [56.972624411205224]
This paper introduces a novel approach that incorporates personalized popularity information into sequential recommendation.
Experimental results demonstrate that a Personalized Most Popular recommender outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2024-09-06T15:05:12Z) - Towards Explainable and Interpretable Musical Difficulty Estimation: A Parameter-efficient Approach [49.2787113554916]
Estimating music piece difficulty is important for organizing educational music collections.
Our work employs explainable descriptors for difficulty estimation in symbolic music representations.
Our approach, evaluated on piano repertoire categorized into 9 classes, achieved 41.4% accuracy independently, with a mean squared error (MSE) of 1.7.
arXiv Detail & Related papers (2024-08-01T11:23:42Z) - Music Era Recognition Using Supervised Contrastive Learning and Artist Information [11.126020721501956]
Music era information can be an important feature for playlist generation and recommendation.
An audio-based model is developed to predict the era from audio.
For the case where the artist information is available, we extend the audio-based model to take multimodal inputs and develop a framework, called MultiModal Contrastive (MMC) learning, to enhance the training.
arXiv Detail & Related papers (2024-07-07T13:43:55Z) - MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z) - MusicRL: Aligning Music Generation to Human Preferences [62.44903326718772]
MusicRL is the first music generation system finetuned from human feedback.
We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences.
We train MusicRL-U, the first text-to-music model that incorporates human feedback at scale.
arXiv Detail & Related papers (2024-02-06T18:36:52Z) - Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
arXiv Detail & Related papers (2023-08-28T14:12:25Z) - Music Genre Classification with ResNet and Bi-GRU Using Visual Spectrograms [4.354842354272412]
The limitations of manual genre classification have highlighted the need for a more advanced system.
Traditional machine learning techniques have shown potential in genre classification, but fail to capture the full complexity of music data.
This study proposes a novel approach using visual spectrograms as input and a hybrid model that combines the strengths of the Residual Neural Network (ResNet) and the Gated Recurrent Unit (GRU).
arXiv Detail & Related papers (2023-07-20T11:10:06Z) - Personalized Popular Music Generation Using Imitation and Structure [1.971709238332434]
We propose a statistical machine learning model that is able to capture and imitate the structure, melody, chord, and bass style from a given example seed song.
An evaluation using 10 pop songs shows that our new representations and methods are able to create high-quality stylistic music.
arXiv Detail & Related papers (2021-05-10T23:43:00Z) - Incorporating Music Knowledge in Continual Dataset Augmentation for Music Generation [69.06413031969674]
Aug-Gen is a method of dataset augmentation for any music generation system trained on a resource-constrained domain.
We apply Aug-Gen to Transformer-based chorale generation in the style of J.S. Bach, and show that this allows for longer training and results in better generative output.
arXiv Detail & Related papers (2020-06-23T21:06:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.