Related Rhythms: Recommendation System To Discover Music You May Like
- URL: http://arxiv.org/abs/2309.13544v1
- Date: Sun, 24 Sep 2023 04:18:40 GMT
- Title: Related Rhythms: Recommendation System To Discover Music You May Like
- Authors: Rahul Singh and Pranav Kanuparthi
- Abstract summary: In this paper, a distributed Machine Learning pipeline is delineated, which is capable of taking a subset of songs as input and producing a new subset of songs identified as being similar to the inputted subset.
The publicly accessible Million Songs dataset (MSD) enables researchers to develop and explore reasonably efficient systems for audio track analysis and recommendations.
The objective of the proposed application is to leverage an ML system trained to optimally recommend songs that a user might like.
- Score: 2.7152798636894193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning models are being utilized extensively to drive recommender
systems, which is a widely explored topic today. This is especially true of the
music industry, where we are witnessing a surge in growth. Besides a large
chunk of active users, these systems are fueled by massive amounts of data.
These large-scale systems yield applications that aim to provide a better user
experience and to keep customers actively engaged. In this paper, a distributed
Machine Learning (ML) pipeline is delineated, which is capable of taking a
subset of songs as input and producing a new subset of songs identified as
being similar to the inputted subset. The publicly accessible Million Songs
Dataset (MSD) enables researchers to develop and explore reasonably efficient
systems for audio track analysis and recommendations, without having to access
a commercialized music platform. The objective of the proposed application is
to leverage an ML system trained to optimally recommend songs that a user might
like.
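The pipeline described above takes a subset of songs as input and returns a new subset judged similar to it. As a rough, single-machine illustration only (not the authors' distributed implementation), the sketch below assumes per-track feature vectors, such as ones derived from the Million Songs Dataset, and uses scikit-learn's NearestNeighbors to retrieve tracks closest to the seed subset; the feature matrix, track IDs, and the mean-vector query are all placeholder choices.

```python
# Minimal, single-machine sketch of "songs similar to a seed subset".
# Not the paper's distributed pipeline; feature vectors and track IDs
# are assumed to come from an MSD-style preprocessing step.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def recommend(features: np.ndarray, track_ids: list[str],
              seed_ids: list[str], k: int = 10) -> list[str]:
    """Return up to k track IDs whose features are closest to the seed subset."""
    index = {tid: i for i, tid in enumerate(track_ids)}
    seed_rows = [index[t] for t in seed_ids if t in index]
    if not seed_rows:
        return []
    # Represent the seed subset by its mean feature vector (one simple choice).
    query = features[seed_rows].mean(axis=0, keepdims=True)
    nn = NearestNeighbors(n_neighbors=min(k + len(seed_rows), len(track_ids)),
                          metric="cosine").fit(features)
    _, neighbors = nn.kneighbors(query)
    seeds = set(seed_rows)
    recs = [track_ids[j] for j in neighbors[0] if j not in seeds]
    return recs[:k]

# Toy usage with random "audio features" standing in for real MSD-derived ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
ids = [f"TR{i:07d}" for i in range(1000)]
print(recommend(X, ids, seed_ids=ids[:3], k=5))
```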
Related papers
- SoundSignature: What Type of Music Do You Like? [0.0]
SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs.
The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic/musical features with the assistant's extensive knowledge of the artists and bands.
arXiv Detail & Related papers (2024-10-04T12:40:45Z)
- Music Genre Classification: Training an AI model [0.0]
Music genre classification is an area that utilizes machine learning models and techniques for the processing of audio signals.
In this research I explore various machine learning algorithms for the purpose of music genre classification, using features extracted from audio signals.
I aim to assess the robustness of machine learning models for genre classification and to compare their results.
arXiv Detail & Related papers (2024-05-23T23:07:01Z)
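The genre-classification study above compares several machine learning algorithms on features extracted from audio signals. The sketch below illustrates that kind of comparison under stated assumptions: pre-extracted feature vectors (e.g., MFCC statistics) and genre labels are assumed to already exist as arrays, and the three candidate classifiers are generic choices, not necessarily those used in the paper.

```python
# Sketch: compare a few standard classifiers on pre-extracted audio features.
# X (n_tracks, n_features) and y (genre labels) are assumed inputs; the
# actual features and models used in the paper may differ.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def compare_models(X: np.ndarray, y: np.ndarray) -> dict[str, float]:
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "svm_rbf": SVC(kernel="rbf"),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }
    scores = {}
    for name, model in candidates.items():
        pipe = make_pipeline(StandardScaler(), model)
        scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    return scores

# Toy data standing in for real MFCC-style features and genre labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 5, size=300)
print(compare_models(X, y))
```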
- Loop Copilot: Conducting AI Ensembles for Music Generation and Iterative Editing [10.159860910939686]
Loop Copilot is a novel system that enables users to generate and iteratively refine music through an interactive, multi-round dialogue interface.
The system uses a large language model to interpret user intentions and select appropriate AI models for task execution.
arXiv Detail & Related papers (2023-10-19T01:20:12Z)
- MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models [54.55063772090821]
MusicAgent integrates numerous music-related tools and an autonomous workflow to address user requirements.
The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect.
arXiv Detail & Related papers (2023-10-18T13:31:10Z)
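Loop Copilot and MusicAgent above both describe a controller that interprets a user's request and dispatches it to an appropriate music tool. The following toy sketch shows only that routing pattern: the tool names, the keyword-based interpreter standing in for a large language model, and the interfaces are invented for illustration and do not come from either paper.

```python
# Illustrative routing pattern: interpret a request, pick a tool, run it.
# A real system would use an LLM for interpretation; here a keyword lookup
# stands in so the example stays self-contained.
from typing import Callable

def generate_music(prompt: str) -> str:
    return f"[generated clip for: {prompt}]"

def edit_music(prompt: str) -> str:
    return f"[edited clip as requested: {prompt}]"

def classify_genre(prompt: str) -> str:
    return f"[predicted genre for: {prompt}]"

TOOLS: dict[str, Callable[[str], str]] = {
    "generate": generate_music,
    "edit": edit_music,
    "classify": classify_genre,
}

def interpret(request: str) -> str:
    """Toy stand-in for LLM-based intent detection."""
    lowered = request.lower()
    for intent in TOOLS:
        if intent in lowered:
            return intent
    return "generate"  # default intent

def handle(request: str) -> str:
    intent = interpret(request)
    return TOOLS[intent](request)

print(handle("Please edit the drums to be softer"))
print(handle("Classify this track's genre"))
```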
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach that addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
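The fairness approach above rests on the idea that songs which sound similar should receive similar representations. As a generic illustration (the paper's actual loss and GNN setup are not given here), the sketch below computes a simple similarity-preservation penalty that grows when pairwise distances in embedding space drift away from pairwise distances in audio-feature space.

```python
# Illustrative individual-fairness style penalty: embeddings of songs that
# are close in audio-feature space should also be close to each other.
# This is a generic formulation, not the specific loss used in the paper.
import numpy as np

def similarity_preservation_penalty(audio_feats: np.ndarray,
                                    embeddings: np.ndarray,
                                    n_pairs: int = 1000,
                                    seed: int = 0) -> float:
    """Mean squared gap between pairwise distances in the two spaces."""
    rng = np.random.default_rng(seed)
    n = audio_feats.shape[0]
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    d_audio = np.linalg.norm(audio_feats[i] - audio_feats[j], axis=1)
    d_embed = np.linalg.norm(embeddings[i] - embeddings[j], axis=1)
    return float(np.mean((d_embed - d_audio) ** 2))

# Toy check: a random linear map distorts pairwise distances (large penalty),
# while an orthogonal map preserves them (penalty near zero).
rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 16))
q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # orthogonal matrix
print(similarity_preservation_penalty(feats, feats @ rng.normal(size=(16, 16))))
print(similarity_preservation_penalty(feats, feats @ q))  # close to zero
```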
- Music Genre Classification with ResNet and Bi-GRU Using Visual Spectrograms [4.354842354272412]
The limitations of manual genre classification have highlighted the need for a more advanced system.
Traditional machine learning techniques have shown potential in genre classification, but fail to capture the full complexity of music data.
This study proposes a novel approach that uses visual spectrograms as input and a hybrid model combining the strengths of the Residual Neural Network (ResNet) and the Gated Recurrent Unit (GRU).
arXiv Detail & Related papers (2023-07-20T11:10:06Z)
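The hybrid model above pairs a ResNet-style convolutional front end with a bidirectional GRU over spectrogram frames. The sketch below is one plausible way to wire such a model in PyTorch; the layer counts, widths, frequency pooling, and classification head are assumptions rather than the paper's exact architecture.

```python
# One plausible ResNet + Bi-GRU hybrid over spectrograms (PyTorch sketch).
# Exact depths, widths, and pooling differ from the paper; this only shows
# the wiring: CNN features per time step -> bidirectional GRU -> classifier.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic 2-conv residual block (channel count unchanged)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))

class ResNetBiGRU(nn.Module):
    def __init__(self, n_genres: int = 10, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            ResidualBlock(64),
            ResidualBlock(64),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse frequency axis, keep time
        )
        self.gru = nn.GRU(input_size=64, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_genres)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, mel_bins, frames)
        feats = self.cnn(spec)                    # (batch, 64, 1, frames)
        feats = feats.squeeze(2).transpose(1, 2)  # (batch, frames, 64)
        out, _ = self.gru(feats)                  # (batch, frames, 2*hidden)
        return self.head(out.mean(dim=1))         # pool over time, classify

model = ResNetBiGRU()
logits = model(torch.randn(4, 1, 128, 256))  # toy mel-spectrogram batch
print(logits.shape)                          # torch.Size([4, 10])
```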
- DISCO-10M: A Large-Scale Music Dataset [20.706469085872516]
We present DISCO-10M, a novel and extensive music dataset.
It surpasses the largest previously available music dataset by an order of magnitude.
We aim to democratize and facilitate new research to help advance the development of novel machine learning models for music.
arXiv Detail & Related papers (2023-06-23T14:27:14Z)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of representations of all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z)
- Retrieval-Enhanced Machine Learning [110.5237983180089]
We describe a generic retrieval-enhanced machine learning framework, which includes a number of existing models as special cases.
REML challenges information retrieval conventions, presenting opportunities for novel advances in core areas, including optimization.
The REML research agenda lays a foundation for a new style of information access research and paves a path towards advancing machine learning and artificial intelligence.
arXiv Detail & Related papers (2022-05-02T21:42:45Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Codified audio language modeling learns useful representations for music information retrieval [77.63657430536593]
We show that language models pre-trained on codified (discretely-encoded) music audio learn representations that are useful for downstream MIR tasks.
To determine if Jukebox's representations contain useful information for MIR, we use them as input features to train shallow models on several MIR tasks.
We observe that representations from Jukebox are considerably stronger than those from models pre-trained on tagging, suggesting that pre-training via codified audio language modeling may address blind spots in conventional approaches.
arXiv Detail & Related papers (2021-07-12T18:28:50Z)
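The last entry evaluates codified-audio language-model representations by training shallow models on downstream MIR tasks. The probing step itself is simple, as sketched below; the expensive part, extracting the representations (e.g., from Jukebox), is assumed to have been done already, and the linear probe and toy labels are placeholders.

```python
# Shallow "probe" over pre-extracted audio representations, as used to
# compare representation quality across pre-trained models. Extraction of
# the representations themselves (e.g., from Jukebox) is assumed done.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe_accuracy(reps: np.ndarray, labels: np.ndarray) -> float:
    """Train a linear probe on frozen representations and report accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        reps, labels, test_size=0.2, random_state=0, stratify=labels)
    clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Toy stand-ins for pooled audio representations and task labels.
rng = np.random.default_rng(0)
reps = rng.normal(size=(400, 512))
labels = rng.integers(0, 8, size=400)
print(probe_accuracy(reps, labels))
```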
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.