Text2Playlist: Generating Personalized Playlists from Text on Deezer
- URL: http://arxiv.org/abs/2501.05894v1
- Date: Fri, 10 Jan 2025 11:46:51 GMT
- Title: Text2Playlist: Generating Personalized Playlists from Text on Deezer
- Authors: Mathieu Delcluze, Antoine Khoury, Clémence Vast, Valerio Arnaudo, Léa Briand, Walid Bendada, Thomas Bouabça
- Abstract summary: Text2Playlist is a stand-alone tool for Deezer. It generates query-specific, personalized playlists and has been successfully deployed at scale.
- Score: 1.5558822250482192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The streaming service Deezer relies heavily on its search feature to help users navigate its extensive music catalog. However, search is primarily designed to find specific items and does not lead directly to a smooth listening experience. We present Text2Playlist, a stand-alone tool that addresses these limitations. Text2Playlist leverages generative AI, music information retrieval, and recommendation systems to generate query-specific and personalized playlists, and it has been successfully deployed at scale.
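The abstract does not spell out the pipeline, but it names three ingredients: generative AI to interpret the query, music information retrieval to match catalog items, and recommendation to personalize the result. The sketch below is a minimal, hypothetical illustration of how such a chain could fit together; all function names, the toy catalog, and the keyword-based stand-in for the LLM parser are assumptions, not Deezer's implementation.

```python
# Hypothetical text-to-playlist pipeline in the spirit of Text2Playlist:
# parse the free-text query into tags, retrieve matching catalog tracks,
# then re-rank them for the user. Illustrative only.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    tags: frozenset  # genre / mood / era tags from music information retrieval

CATALOG = [
    Track("Song A", "Artist 1", frozenset({"rock", "energetic", "2010s"})),
    Track("Song B", "Artist 2", frozenset({"jazz", "calm", "1960s"})),
    Track("Song C", "Artist 3", frozenset({"rock", "calm", "1990s"})),
]

def parse_query(query: str) -> set:
    """Stand-in for the generative-AI step: extract known tags from free text."""
    vocabulary = {"rock", "jazz", "calm", "energetic", "1960s", "1990s", "2010s"}
    return {word for word in query.lower().split() if word in vocabulary}

def retrieve(tags: set, catalog: list) -> list:
    """Keep tracks that match at least one requested tag."""
    return [t for t in catalog if t.tags & tags]

def personalize(tracks: list, user_favorite_artists: set) -> list:
    """Re-rank so that tracks by the user's favorite artists come first."""
    return sorted(tracks, key=lambda t: t.artist not in user_favorite_artists)

if __name__ == "__main__":
    tags = parse_query("calm rock for a rainy evening")
    playlist = personalize(retrieve(tags, CATALOG), user_favorite_artists={"Artist 3"})
    print([t.title for t in playlist])  # ['Song C', 'Song A', 'Song B']
```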
Related papers
- Just Ask for Music (JAM): Multimodal and Personalized Natural Language Music Recommendation [47.05078668091976]
We present JAM (Just Ask for Music), a lightweight and intuitive framework for natural language music recommendation.
To capture the complexity of music and user intent, JAM aggregates multimodal item features via cross-attention and sparse mixture-of-experts.
Our results show that JAM provides accurate recommendations, produces intuitive representations suitable for practical use cases, and can be easily integrated with existing music recommendation stacks.
arXiv Detail & Related papers (2025-07-21T17:36:03Z)
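The JAM summary mentions aggregating multimodal item features via cross-attention (plus a sparse mixture-of-experts, omitted here). A minimal PyTorch sketch of such a cross-attention aggregation step, with illustrative shapes and module names rather than the paper's actual code, could look like:

```python
# Hypothetical cross-attention aggregation of multimodal item features.
import torch
import torch.nn as nn

class MultimodalAggregator(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_emb: torch.Tensor, modality_feats: torch.Tensor) -> torch.Tensor:
        # query_emb: (batch, 1, dim) embedding of the natural-language request
        # modality_feats: (batch, n_modalities, dim), e.g. audio, lyrics, metadata
        fused, _ = self.attn(query_emb, modality_feats, modality_feats)
        return fused.squeeze(1)  # (batch, dim) fused item representation

agg = MultimodalAggregator()
out = agg(torch.randn(2, 1, 256), torch.randn(2, 3, 256))
print(out.shape)  # torch.Size([2, 256])
```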
- Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval [8.439626984193591]
We propose to address the task of prompt-based music recommendation as a generative retrieval task.
We introduce Text2Tracks, a generative retrieval model that learns a mapping from a user's music recommendation prompt to the relevant track IDs directly.
arXiv Detail & Related papers (2025-03-31T15:09:19Z)
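Text2Tracks is described as mapping a prompt directly to track IDs. One common ingredient of generative retrieval, shown below as a hedged sketch rather than the paper's method, is constrained decoding over a prefix trie of valid ID token sequences, so the model can only emit IDs that exist in the catalog:

```python
# Toy prefix trie over "semantic ID" token sequences for constrained decoding.
def build_trie(track_id_token_seqs):
    trie = {}
    for seq in track_id_token_seqs:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie

def allowed_next_tokens(trie, prefix):
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return sorted(node.keys())

# Illustrative catalog: each track ID is a short coarse-to-fine token sequence.
catalog_ids = [("rock", "2010s", "t42"), ("rock", "1990s", "t7"), ("jazz", "1960s", "t3")]
trie = build_trie(catalog_ids)
print(allowed_next_tokens(trie, ()))         # ['jazz', 'rock']
print(allowed_next_tokens(trie, ("rock",)))  # ['1990s', '2010s']
```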
- SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation [75.86473375730392]
SongGen is a fully open-source, single-stage auto-regressive transformer for controllable song generation.
It supports two output modes: mixed mode, which generates a mixture of vocals and accompaniment directly, and dual-track mode, which synthesizes them separately.
To foster community engagement and future research, we will release our model weights, training code, annotated data, and preprocessing pipeline.
arXiv Detail & Related papers (2025-02-18T18:52:21Z)
- SongCreator: Lyrics-based Universal Song Generation [53.248473603201916]
SongCreator is a song-generation system designed to tackle the challenge of generating songs with both vocals and accompaniment given lyrics.
The model features two novel designs: a meticulously designed dual-sequence language model (DSLM) to capture the information of vocals and accompaniment for song generation, and a series of attention mask strategies for the DSLM.
Experiments demonstrate the effectiveness of SongCreator by achieving state-of-the-art or competitive performances on all eight tasks.
arXiv Detail & Related papers (2024-09-09T19:37:07Z)
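The summary only names "attention mask strategies" for the dual-sequence language model without detailing them. Purely as an illustration of what such a mask could look like, the sketch below lets vocal and accompaniment tokens attend causally within their own sequence and freely across sequences; this is an assumption, not the paper's actual masking scheme:

```python
# Illustrative attention mask for two concatenated token sequences.
import torch

def dual_sequence_mask(n_vocal: int, n_accomp: int) -> torch.Tensor:
    """True = attention allowed. Each sequence attends causally to itself
    and fully to the other sequence (one illustrative choice among many)."""
    n = n_vocal + n_accomp
    mask = torch.ones(n, n).bool()
    mask[:n_vocal, :n_vocal] = torch.tril(torch.ones(n_vocal, n_vocal)).bool()
    mask[n_vocal:, n_vocal:] = torch.tril(torch.ones(n_accomp, n_accomp)).bool()
    return mask

print(dual_sequence_mask(3, 3).int())
```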
- LARP: Language Audio Relational Pre-training for Cold-Start Playlist Continuation [49.89372182441713]
We introduce LARP, a multi-modal cold-start playlist continuation model.
Our framework uses increasing stages of task-specific abstraction: within-track (language-audio) contrastive loss, track-track contrastive loss, and track-playlist contrastive loss.
arXiv Detail & Related papers (2024-06-20T14:02:15Z)
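The three LARP losses (language-audio, track-track, track-playlist) are all contrastive. A generic InfoNCE-style loss such as the one below could in principle be applied at each level; this is a hedged sketch under that assumption, not the paper's implementation:

```python
# Generic InfoNCE contrastive loss over paired embeddings.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.07):
    """anchors, positives: (batch, dim); row i of each is a positive pair."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(anchors.size(0))         # the matching index is the positive
    return F.cross_entropy(logits, targets)

# e.g. within-track loss: caption embedding vs. audio embedding of the same track
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```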
- MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models [54.55063772090821]
MusicAgent integrates numerous music-related tools and an autonomous workflow to address user requirements.
The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect.
arXiv Detail & Related papers (2023-10-18T13:31:10Z)
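MusicAgent is described as routing user requirements to music-related tools through an autonomous workflow. The toy dispatcher below illustrates that routing idea only; the tool names and the keyword-based planner stand in for the LLM-driven workflow and are not taken from the paper:

```python
# Toy tool registry and dispatcher standing in for an LLM-driven agent workflow.
def transcribe(request): return f"transcription of: {request}"
def generate_music(request): return f"generated audio for: {request}"
def classify_genre(request): return f"genre labels for: {request}"

TOOLS = {"transcribe": transcribe, "generate": generate_music, "classify": classify_genre}

def plan(request: str) -> str:
    """Stand-in for the planner: pick a tool name mentioned in the request."""
    for name in TOOLS:
        if name in request.lower():
            return name
    return "classify"  # default tool

def run_agent(request: str) -> str:
    return TOOLS[plan(request)](request)

print(run_agent("please generate a calm piano piece"))
```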
- IteraTTA: An interface for exploring both text prompts and audio priors in generating music with text-to-audio models [40.798454815430034]
IteraTTA is designed to aid users in refining text prompts and selecting favorable audio priors from the generated audios.
Our implementation and discussions highlight design considerations that are specifically required for text-to-audio models.
arXiv Detail & Related papers (2023-07-24T11:00:01Z)
- Music Playlist Title Generation Using Artist Information [4.201869316472344]
We present an encoder-decoder model that generates a playlist title from a sequence of music tracks.
Comparing the track IDs and artist IDs as input sequences, we show that the artist-based approach significantly enhances the performance in terms of word overlap, semantic relevance, and diversity.
arXiv Detail & Related papers (2023-01-14T00:19:39Z)
- Exploiting Device and Audio Data to Tag Music with User-Aware Listening Contexts [8.224040855079176]
We propose a system which can generate a situational playlist for a user at a certain time by leveraging user-aware music autotaggers.
Experiments show that such a context-aware personalized music retrieval system is feasible, but the performance decreases in the case of new users.
arXiv Detail & Related papers (2022-11-14T10:08:12Z)
- Youling: an AI-Assisted Lyrics Creation System [72.00418962906083]
This paper demonstrates Youling, an AI-assisted lyrics creation system designed to collaborate with music creators.
In the lyrics generation process, Youling supports a traditional one-pass full-text generation mode as well as an interactive generation mode.
The system also provides a revision module that lets users repeatedly revise undesired sentences or words in the lyrics.
arXiv Detail & Related papers (2022-01-18T03:57:04Z)
- Automatic Embedding of Stories Into Collections of Independent Media [5.188557858279645]
We look at how machine learning techniques can be used to automatically embed stories into collections of independent media.
We use models that extract the tempo of songs to make a music playlist follow a narrative arc.
arXiv Detail & Related papers (2021-11-03T13:36:47Z)
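The story-embedding paper combines tempo extraction with playlist ordering so that the list follows a narrative arc. Assuming per-track tempi (in BPM) have already been estimated with a beat tracker, a simple rise-then-fall arrangement could look like the sketch below; the arc shape and the example values are illustrative assumptions:

```python
# Order tracks so estimated tempo rises to a single peak and then descends.
tracks = {"intro": 80, "build": 110, "peak": 150, "cooldown": 95, "outro": 70}

def rise_and_fall(track_tempos: dict) -> list:
    """Arrange tracks along a rise-then-fall tempo arc."""
    ordered = sorted(track_tempos, key=track_tempos.get)  # slowest ... fastest
    rising, falling = ordered[0::2], ordered[1::2][::-1]  # alternate sides of the peak
    return rising + falling

print(rise_and_fall(tracks))  # ['outro', 'cooldown', 'peak', 'build', 'intro']
```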
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Quick Lists: Enriched Playlist Embeddings for Future Playlist Recommendation [0.0]
We present a novel method for generating playlist embeddings that are invariant to playlist length and sensitive to local and global track ordering.
The embeddings also capture information about playlist sequencing, and are enriched with side information about the playlist user.
arXiv Detail & Related papers (2020-06-17T17:08:52Z)
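Quick Lists calls for playlist embeddings that are invariant to playlist length yet sensitive to track ordering. One generic way to get both properties, shown below as an illustration rather than the paper's method, is to weight each track embedding by its relative position before averaging:

```python
# Position-weighted average of track embeddings: length-invariant, order-sensitive.
import numpy as np

def playlist_embedding(track_embs: np.ndarray) -> np.ndarray:
    """track_embs: (n_tracks, dim) in playlist order."""
    n = len(track_embs)
    positions = np.linspace(0.0, 1.0, n)  # relative position in [0, 1]
    weights = 1.0 + positions             # later tracks weigh slightly more
    return (weights[:, None] * track_embs).sum(axis=0) / weights.sum()

embs = np.random.randn(5, 16)
print(playlist_embedding(embs).shape)                                         # (16,)
print(np.allclose(playlist_embedding(embs), playlist_embedding(embs[::-1])))  # False
```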