Review-Based Tip Generation for Music Songs
- URL: http://arxiv.org/abs/2205.06985v1
- Date: Sat, 14 May 2022 06:40:49 GMT
- Title: Review-Based Tip Generation for Music Songs
- Authors: Jingya Zang, Cuiyun Gao, Yupan Chen, Ruifeng Xu, Lanjun Zhou, Xuan
Wang
- Abstract summary: We propose a framework named GenTMS for automatically generating tips from song reviews.
The dataset involves 8,003 Chinese tips/non-tips from 128 songs.
Experiments show that GenTMS achieves top-10 precision of 85.56%, outperforming the baseline models by at least 3.34%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reviews of songs play an important role in online music service platforms.
Prior research shows that users can make quicker and more informed decisions
when presented with meaningful song reviews. However, song reviews are generally
long, and most of them are uninformative, which makes it difficult for users to
efficiently grasp the meaningful messages needed for decision-making. To address
this problem, one practical strategy is to provide tips,
i.e., short, concise, empathetic, and self-contained descriptions about songs.
Tips are produced from song reviews and should express non-trivial insight
about the songs. To the best of our knowledge, no prior studies have explored
the tip generation task in the music domain. In this paper, we create a dataset
named MTips for the task and propose a framework named GenTMS for automatically
generating tips from song reviews. The dataset comprises 8,003 Chinese
tips/non-tips from 128 songs distributed across five song genres.
Experimental results show that GenTMS achieves top-10 precision of
85.56%, outperforming the baseline models by at least 3.34%. In addition, to
simulate the practical usage of the proposed framework, we also experiment with
previously unseen songs, on which GenTMS again achieves the best performance,
with an average top-10 precision of 78.89%. The results demonstrate the
effectiveness of the proposed framework for tip generation in the music domain.
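The reported metric can be read as a ranking measure: candidate sentences drawn from the reviews are scored, and top-10 precision is the fraction of the ten highest-scored candidates that annotators labeled as genuine tips. A minimal sketch of this computation (the function name and data shapes are illustrative assumptions, not from the paper):

```python
def top_k_precision(scored_candidates, labels, k=10):
    """Fraction of the top-k ranked candidates that are true tips.

    scored_candidates: list of (candidate_id, score) pairs, any order.
    labels: dict mapping candidate_id -> 1 (tip) or 0 (non-tip).
    """
    # Rank candidates by model score, highest first.
    ranked = sorted(scored_candidates, key=lambda pair: pair[1], reverse=True)
    top_k = ranked[:k]
    # Count how many of the top-k are labeled as tips.
    return sum(labels[cid] for cid, _ in top_k) / k


# Toy usage: with k=2, one of the two top-ranked candidates is a tip.
scores = [("a", 0.9), ("b", 0.8), ("c", 0.1)]
gold = {"a": 1, "b": 0, "c": 1}
print(top_k_precision(scores, gold, k=2))  # 0.5
```

Averaging this value over the evaluated songs would yield per-song figures such as the 78.89% reported for unseen songs.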
Related papers
- Music Era Recognition Using Supervised Contrastive Learning and Artist Information
Music era information can be an important feature for playlist generation and recommendation.
An audio-based model is developed to predict the era from audio.
For the case where the artist information is available, we extend the audio-based model to take multimodal inputs and develop a framework, called MultiModal Contrastive (MMC) learning, to enhance the training.
arXiv Detail & Related papers (2024-07-07T13:43:55Z)
- MuPT: A Generative Symbolic Music Pretrained Transformer
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation)
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- MusicRL: Aligning Music Generation to Human Preferences
MusicRL is the first music generation system finetuned from human feedback.
We deploy MusicLM to users and collect a substantial dataset comprising 300,000 pairwise preferences.
We train MusicRL-U, the first text-to-music model that incorporates human feedback at scale.
arXiv Detail & Related papers (2024-02-06T18:36:52Z)
- Music Recommendation on Spotify using Deep Learning
Hosting about 50 million songs and 4 billion playlists, Spotify generates an enormous amount of data every single day.
This paper aims to perform appropriate filtering using a deep learning approach for maximum user likeability.
arXiv Detail & Related papers (2023-12-10T07:35:17Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN) based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of representations of all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z)
- An Analysis of Classification Approaches for Hit Song Prediction using Engineered Metadata Features with Lyrics and Audio Features
This study aims to improve the prediction result of the top 10 hits among Billboard Hot 100 songs using more alternative metadata.
Five machine learning approaches are applied, including: k-nearest neighbours, Naive Bayes, Random Forest, Logistic Regression and Multilayer Perceptron.
Our results show that Random Forest (RF) and Logistic Regression (LR) with all features outperform other models, achieving 89.1% and 87.2% accuracy, and 0.91 and 0.93 AUC, respectively.
arXiv Detail & Related papers (2023-01-31T09:48:53Z)
- Melody transcription via generative pre-training
A key challenge in melody transcription is building methods which can handle broad audio containing any number of instrument ensembles and musical styles.
To confront this challenge, we leverage representations from Jukebox (Dhariwal et al. 2020), a generative model of broad music audio.
We derive a new dataset containing 50 hours of melody transcriptions from crowdsourced annotations of broad music.
arXiv Detail & Related papers (2022-12-04T18:09:23Z)
- Video Background Music Generation: Dataset, Method and Evaluation
We introduce a complete recipe including dataset, benchmark model, and evaluation metric for video background music generation.
We present SymMV, a video and symbolic music dataset with various musical annotations.
We also propose a benchmark video background music generation framework named V-MusProd.
arXiv Detail & Related papers (2022-11-21T08:39:48Z)
- Context-Based Music Recommendation Algorithm Evaluation
This paper explores 6 machine learning algorithms and their individual accuracy for predicting whether a user will like a song.
The algorithms explored include Logistic Regression, Naive Bayes, Sequential Minimal Optimization (SMO), Multilayer Perceptron (Neural Network), Nearest Neighbor, and Random Forest.
With the analysis of the specific characteristics of each song provided by the Spotify API, Random Forest is the most successful algorithm for predicting whether a user will like a song with an accuracy of 84%.
arXiv Detail & Related papers (2021-12-16T01:46:36Z)
- MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
Symbolic music understanding refers to the understanding of music from the symbolic data.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.