LSTM-RPA: A Simple but Effective Long Sequence Prediction Algorithm for
Music Popularity Prediction
- URL: http://arxiv.org/abs/2110.15790v1
- Date: Wed, 27 Oct 2021 08:59:09 GMT
- Title: LSTM-RPA: A Simple but Effective Long Sequence Prediction Algorithm for
Music Popularity Prediction
- Authors: Kun Li, Meng Li, Yanling Li, and Min Lin
- Abstract summary: Researchers could predict the trend of popular songs accurately by analyzing this data.
Traditional trend prediction models predict short-term trends better than long-term ones.
We propose the improved LSTM Rolling Prediction Algorithm (LSTM-RPA)
- Score: 19.003312140124706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The big data about music history contains information about time and
user behavior. Researchers can accurately predict the trend of popular songs by
analyzing these data. Traditional trend prediction models predict short-term
trends better than long-term ones. In this paper, we propose the improved LSTM
Rolling Prediction Algorithm (LSTM-RPA), which combines LSTM historical input
with the current prediction result as the model input for the next time step,
converting the long-trend prediction task into multiple short-trend prediction
tasks. The evaluation results show that LSTM-RPA improved the F score by
13.03%, 16.74%, 11.91%, and 18.52% compared with LSTM, BiLSTM, GRU, and RNN,
respectively, and outperformed the traditional sequence models ARIMA and SMA by
10.67% and 3.43%. Code: https://github.com/maliaosaide/lstm-rpa
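A minimal sketch of the rolling idea in the abstract, assuming a trained Keras-style one-step LSTM forecaster; the function and parameter names are illustrative, not taken from the linked repository:

```python
import numpy as np

def rolling_predict(model, history, horizon, window):
    """LSTM rolling prediction: each one-step forecast is appended to the
    input window and fed back in, so one long-trend prediction task becomes
    `horizon` short-trend prediction tasks."""
    buf = list(history[-window:])              # most recent observations
    preds = []
    for _ in range(horizon):
        x = np.asarray(buf[-window:], dtype=np.float32)
        x = x.reshape(1, window, 1)            # (batch, timesteps, features)
        y = float(model.predict(x, verbose=0)[0, 0])
        preds.append(y)
        buf.append(y)                          # feed the prediction back in
    return preds
```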
Related papers
- Predicting Emergent Capabilities by Finetuning [98.9684114851891]
We find that finetuning language models can shift the point in scaling at which emergence occurs towards less capable models.
We validate this approach using four standard NLP benchmarks.
We find that, in some cases, we can accurately predict whether models trained with up to 4x more compute have emerged.
arXiv Detail & Related papers (2024-11-25T01:48:09Z)
- Time Series Stock Price Forecasting Based on Genetic Algorithm (GA)-Long Short-Term Memory Network (LSTM) Optimization [0.0]
A time series algorithm based on Genetic Algorithm (GA) and Long Short-Term Memory Network (LSTM) is used to forecast stock prices effectively.
Results on the test set show that the GA-optimized LSTM predicts stock prices accurately (a sketch of the GA search loop follows this entry).
arXiv Detail & Related papers (2024-05-06T04:04:27Z)
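The entry above describes using a genetic algorithm to tune LSTM hyperparameters. A minimal, self-contained sketch of that outer loop, with a toy fitness function standing in for actual LSTM training and validation; the search space and all names are illustrative assumptions:

```python
import random

# Hypothetical search space: (hidden units, window length, learning-rate exponent).
SPACE = [(16, 256), (5, 60), (-4, -1)]

def fitness(genome):
    """Stand-in for training an LSTM with these hyperparameters and
    returning negative validation error; replace with real training."""
    units, window, lr_exp = genome
    return -((units - 96) ** 2 + (window - 30) ** 2 + (lr_exp + 3) ** 2)

def random_genome():
    return [random.randint(lo, hi) for lo, hi in SPACE]

def crossover(a, b):
    cut = random.randrange(1, len(a))          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(g, rate=0.2):
    return [random.randint(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(g, SPACE)]

def ga_search(pop_size=20, generations=15):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]           # keep the best quarter
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

print(ga_search())  # best (units, window, lr exponent) found
```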
- Kernel Corrector LSTM [1.034961673489652]
We propose a new RW-ML algorithm, Kernel Corrector LSTM (KcLSTM), that replaces the meta-learner of cLSTM with a simpler method: kernel smoothing.
We empirically evaluate the forecasting accuracy and training time of the new algorithm against cLSTM and LSTM (a kernel-smoothing sketch follows this entry).
arXiv Detail & Related papers (2024-04-28T18:44:10Z)
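A minimal sketch of forecast correction by kernel smoothing, as the entry above describes for KcLSTM: the correction applied to a new forecast is a Gaussian-kernel-weighted average of the errors the model made on similar past inputs. The function names, kernel choice, and toy data are assumptions, not the authors' code:

```python
import numpy as np

def kernel_corrected_forecast(x_new, y_hat, X_past, errors, bandwidth=1.0):
    """Nadaraya-Watson-style correction of a raw forecast: weight the
    past errors by how similar their input windows are to the new one."""
    d2 = np.sum((X_past - x_new) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))     # Gaussian kernel weights
    correction = np.dot(w, errors) / (np.sum(w) + 1e-12)
    return y_hat + correction

# Toy usage: past input windows, the errors (actual - predicted) on them,
# and a new window with its raw forecast.
X_past = np.random.randn(100, 10)
errors = np.random.randn(100) * 0.1
x_new = np.random.randn(10)
print(kernel_corrected_forecast(x_new, y_hat=2.5, X_past=X_past, errors=errors))
```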
- Human trajectory prediction using LSTM with Attention mechanism [0.0]
We use attention scores to determine which parts of the input data the model should focus on when making predictions.
We show that our modified algorithm performs better than the Social LSTM in predicting the future trajectory of pedestrians in crowded spaces (an attention-over-LSTM sketch follows this entry).
arXiv Detail & Related papers (2023-09-01T08:35:24Z)
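A minimal Keras sketch of the attention idea in the entry above: per-step LSTM outputs are scored, the scores are softmax-normalized over time, and their weighted sum forms the context used for prediction. The architecture and all names are illustrative, not the paper's Social-LSTM variant:

```python
from tensorflow.keras import Model, layers

def lstm_with_attention(timesteps=20, features=2, units=64):
    """LSTM encoder whose per-step outputs are pooled by learned attention
    scores, so the model focuses on the most informative steps of the
    observed trajectory."""
    inp = layers.Input(shape=(timesteps, features))
    h = layers.LSTM(units, return_sequences=True)(inp)  # (batch, T, units)
    scores = layers.Dense(1)(h)                         # one score per step
    weights = layers.Softmax(axis=1)(scores)            # normalize over time
    context = layers.Dot(axes=1)([weights, h])          # weighted sum: (batch, 1, units)
    context = layers.Flatten()(context)
    out = layers.Dense(2)(context)                      # e.g. next (x, y) offset
    return Model(inp, out)

model = lstm_with_attention()
model.summary()
```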
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer directly from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Process Outcome Prediction: CNN vs. LSTM (with Attention) [0.15229257192293202]
We study the performance of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) on time series problems.
Our findings show that all these neural networks achieve satisfactory to high predictive power.
We argue that the speed, early predictive power, and robustness of CNNs should pave the way for their application in process outcome prediction (a comparative sketch of the two architectures follows this entry).
arXiv Detail & Related papers (2021-04-14T15:38:32Z)
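A minimal sketch of the comparison set up in the entry above: a 1D-CNN and an LSTM classifier sharing the same input/output contract, so predictive power and speed can be compared on the same sequence prefixes. Layer sizes and names are illustrative assumptions:

```python
from tensorflow.keras import layers, models

def make_lstm(timesteps, features, n_outcomes):
    return models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.LSTM(64),
        layers.Dense(n_outcomes, activation="softmax"),
    ])

def make_cnn(timesteps, features, n_outcomes):
    return models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(n_outcomes, activation="softmax"),
    ])

# Same input/output contract, so the two models are directly comparable.
for make in (make_lstm, make_cnn):
    m = make(timesteps=30, features=8, n_outcomes=2)
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    m.summary()
```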
- SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z)
- Future Vector Enhanced LSTM Language Model for LVCSR [67.03726018635174]
This paper proposes a novel enhanced long short-term memory (LSTM) LM using the future vector.
Experiments show that the proposed new LSTM LM achieves better BLEU scores for long-term sequence prediction.
Rescoring using both the new and conventional LSTM LMs can achieve a very large improvement on the word error rate.
arXiv Detail & Related papers (2020-07-31T08:38:56Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function outperforms a regression baseline such as ARIMA in the trading bot (a sketch of such a custom loss follows this entry).
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
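A minimal sketch of the custom-loss idea in the entry above: an MSE variant that penalizes wrong-direction return predictions more heavily, since those are the errors that cost a trading strategy money. This loss is an assumption for illustration, not the paper's exact loss:

```python
import tensorflow as tf

def direction_weighted_mse(y_true, y_pred):
    """MSE variant that up-weights errors where the predicted return has
    the wrong sign (an illustrative profit-aware loss, not the paper's)."""
    err = tf.square(y_true - y_pred)
    wrong_side = tf.cast(
        tf.not_equal(tf.sign(y_true), tf.sign(y_pred)), tf.float32)
    weight = 1.0 + 4.0 * wrong_side  # 5x penalty on wrong-direction errors
    return tf.reduce_mean(weight * err)

# Usage with any Keras regression model that predicts next-step returns:
# model.compile(optimizer="adam", loss=direction_weighted_mse)
```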
- COVID-19 growth prediction using multivariate long short term memory [2.588973722689844]
We use the long short-term memory (LSTM) method to learn the correlation of COVID-19 growth over time.
First, we trained the model on data containing confirmed cases from around the globe.
We achieved favorable performance compared with the recurrent neural network (RNN) method, with a comparably low validation error.
arXiv Detail & Related papers (2020-05-10T23:21:19Z)
- ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training [85.35910219651572]
We present a new sequence-to-sequence pre-training model called ProphetNet.
It introduces a novel self-supervised objective named future n-gram prediction.
We conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks.
arXiv Detail & Related papers (2020-01-13T05:12:38Z)