Time Series Analysis of Blockchain-Based Cryptocurrency Price Changes
- URL: http://arxiv.org/abs/2202.13874v1
- Date: Sat, 19 Feb 2022 04:28:07 GMT
- Title: Time Series Analysis of Blockchain-Based Cryptocurrency Price Changes
- Authors: Jacques Fleischer and Gregor von Laszewski and Carlos Theran and Yohn Jairo Parra Bautista
- Abstract summary: We apply AI to historical records of high-risk cryptocurrency coins to train a prediction model that guesses their price.
The model is trained using three layers -- an LSTM, dropout, and a dense layer -- minimizing the loss through 50 epochs of training.
Finally, the notebook plots a line graph of the actual currency price in red and the predicted price in blue.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we apply neural networks and Artificial Intelligence (AI) to
historical records of high-risk cryptocurrency coins to train a prediction
model that guesses their price. This paper's code consists of Jupyter notebooks,
one of which outputs a time series graph of any cryptocurrency's price once a CSV
file of the historical data is input to the program. Another Jupyter
notebook trains an LSTM (long short-term memory) model to predict a
cryptocurrency's closing price. The LSTM is fed the close price, i.e., the
price the currency has at the end of each day, so that it can learn from those
values. The notebook creates two sets, a training set and a test set, to assess
the accuracy of the results.
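As a rough sketch of these first steps, the snippet below loads a historical price CSV, plots the close-price series, and splits it chronologically into a training set and a test set. The file name, column names, and 80/20 split ratio are illustrative assumptions; the paper does not specify them.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load a CSV of historical prices; "btc_history.csv" and the
# column names "Date"/"Close" are assumed, not taken from the paper.
df = pd.read_csv("btc_history.csv", parse_dates=["Date"]).sort_values("Date")

# Time series graph of the closing price.
plt.plot(df["Date"], df["Close"])
plt.xlabel("Date")
plt.ylabel("Close price")
plt.title("Historical closing price")
plt.show()

# Chronological split into training and test sets (assumed 80/20 ratio).
close = df["Close"].to_numpy()
split = int(len(close) * 0.8)
train, test = close[:split], close[split:]
```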
The data is then normalized using manual min-max scaling so that the model
does not experience any bias; this also enhances the performance of the model.
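Manual min-max scaling maps each value x to x' = (x - min) / (max - min), so all inputs lie in [0, 1]. A minimal sketch, continuing the arrays from the snippet above; fitting the minimum and maximum on the training set alone is an assumption (the abstract only says the scaling is manual):

```python
# Manual min-max scaling: x' = (x - min) / (max - min), mapping prices to [0, 1].
# Using only the training set's min/max avoids leaking test-set information;
# this detail is an assumption, not stated in the paper.
lo, hi = train.min(), train.max()
train_scaled = (train - lo) / (hi - lo)
test_scaled = (test - lo) / (hi - lo)

# Predictions are mapped back to prices with the inverse: x = x' * (hi - lo) + lo.
```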
Then, the model is trained using three layers -- an LSTM, dropout, and a dense
layer -- minimizing the loss through 50 epochs of training; this training yields
a recurrent neural network (RNN) fitted to the training set. Additionally, a
graph of the loss at each epoch is produced, showing the loss decreasing over
time. Finally, the notebook plots a line graph of the actual currency price in
red and the predicted price in blue. The process is then repeated for several
more cryptocurrencies to compare prediction models. The parameters of the LSTM,
such as the number of epochs and the batch size, are tuned to try to minimize
the root mean square error.
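The three-layer model described above can be sketched in Keras as follows. The framework choice, window length, unit count, dropout rate, and batch size are all assumptions; the layer types (LSTM, dropout, dense), the 50 epochs, the loss and prediction plots, and the RMSE metric come from the abstract. The sketch continues the scaled arrays from the snippets above.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

def make_windows(series, lookback=60):
    """Turn a 1-D scaled series into (samples, lookback, 1) windows
    plus the next-day target for each window."""
    X, y = [], []
    for i in range(lookback, len(series)):
        X.append(series[i - lookback:i])
        y.append(series[i])
    return np.array(X).reshape(-1, lookback, 1), np.array(y)

X_train, y_train = make_windows(train_scaled)
X_test, y_test = make_windows(test_scaled)

# The three layers named in the abstract: an LSTM, dropout, and a dense
# output layer. Unit count and dropout rate are assumptions.
model = keras.Sequential([
    keras.layers.LSTM(50, input_shape=(X_train.shape[1], 1)),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error")

# 50 epochs as stated in the abstract; batch size 32 is an assumed
# starting point for the tuning the abstract describes.
history = model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

# Graph of the loss over each epoch.
plt.plot(history.history["loss"])
plt.xlabel("Epoch")
plt.ylabel("Training loss")
plt.show()

# Actual price in red, predicted price in blue, after undoing the scaling.
pred = model.predict(X_test).ravel() * (hi - lo) + lo
actual = y_test * (hi - lo) + lo
plt.plot(actual, color="red", label="actual")
plt.plot(pred, color="blue", label="predicted")
plt.legend()
plt.show()

# Root mean square error, the quantity tuned via epochs and batch size.
rmse = float(np.sqrt(np.mean((pred - actual) ** 2)))
print(f"RMSE: {rmse:.2f}")
```

Rerunning the fit with different `epochs` and `batch_size` values and comparing the resulting RMSE reproduces the tuning loop described in the abstract.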
Related papers
- Patch-Level Training for Large Language Models [69.67438563485887]
This paper introduces patch-level training for Large Language Models (LLMs).
During patch-level training, we feed the language model shorter sequences of patches and train it to predict the next patch.
Following this, the model continues token-level training on the remaining training data to align with the inference mode.
arXiv Detail & Related papers (2024-07-17T15:48:39Z)
- Review of deep learning models for crypto price prediction: implementation and evaluation [5.240745112593501]
We review the literature on deep learning for cryptocurrency price forecasting and evaluate novel deep learning models for cryptocurrency price prediction.
Our deep learning models include variants of long short-term memory (LSTM) recurrent neural networks, variants of convolutional neural networks (CNNs) and the Transformer model.
We also carry out a volatility analysis of the four cryptocurrencies studied, which reveals significant fluctuations in their prices throughout the COVID-19 pandemic.
arXiv Detail & Related papers (2024-05-19T03:15:27Z)
- Language models scale reliably with over-training and on downstream tasks [121.69867718185125]
Scaling laws are useful guides for derisking expensive training runs.
However, there remain gaps between current studies and how language models are trained.
For example, scaling laws mostly predict loss, but models are usually compared on downstream task performance.
arXiv Detail & Related papers (2024-03-13T13:54:00Z)
- A Study on Stock Forecasting Using Deep Learning and Statistical Models [3.437407981636465]
This paper reviews several deep learning algorithms for stock price forecasting, using a record of S&P 500 index data for training and testing.
It discusses various models, including the autoregressive integrated moving average (ARIMA) model, the recurrent neural network (RNN) model, the long short-term memory (LSTM) model, the convolutional neural network (CNN) model, and the fully convolutional neural network model.
arXiv Detail & Related papers (2024-02-08T16:45:01Z)
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical early-exiting dynamic neural network (EDNN) has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- Application of Convolutional Neural Networks with Quasi-Reversibility Method Results for Option Forecasting [11.730033307068405]
We create and evaluate new empirical mathematical models for the Black-Scholes equation to analyze data for 92,846 companies.
We solve the Black-Scholes (BS) equation forwards in time as an ill-posed inverse problem, using the Quasi-Reversibility Method (QRM) to predict the option price one day into the future.
The current stage of research combines QRM with Convolutional Neural Networks (CNN), which learn information across a large number of data points simultaneously.
arXiv Detail & Related papers (2022-08-25T04:08:59Z)
- Datamodels: Predicting Predictions from Training Data [86.66720175866415]
We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data.
We show that even simple linear datamodels can successfully predict model outputs.
arXiv Detail & Related papers (2022-02-01T18:15:24Z)
- Wasserstein GAN: Deep Generation applied on Bitcoins financial time series [0.0]
We introduce in this paper a deep neural network called the WGAN-GP, a data-driven model that focuses on sample generation.
The WGAN-GP is supposed to learn the underlying structure of the input data, which, in our case, is the Bitcoin financial time series.
The generated synthetic time series are visually indistinguishable from the real data.
arXiv Detail & Related papers (2021-07-13T11:59:05Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Forecasting Bitcoin closing price series using linear regression and neural networks models [4.17510581764131]
We study how to forecast the daily closing price series of Bitcoin using prices and volumes from prior days.
We followed different approaches in parallel, implementing both statistical techniques and machine learning algorithms.
arXiv Detail & Related papers (2020-01-04T21:04:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.