Forecasting Commodity Prices Using Long Short-Term Memory Neural
Networks
- URL: http://arxiv.org/abs/2101.03087v2
- Date: Fri, 15 Jan 2021 11:13:11 GMT
- Title: Forecasting Commodity Prices Using Long Short-Term Memory Neural
Networks
- Authors: Racine Ly, Fousseini Traore, Khadim Dia
- Abstract summary: This paper applies a recurrent neural network (RNN) method to forecast cotton and oil prices.
We show that machine learning methods fit the data reasonably well but do not systematically outperform classical methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper applies a recurrent neural network (RNN) method to forecast cotton
and oil prices. We show how these new tools from machine learning, particularly
Long Short-Term Memory (LSTM) models, complement traditional methods. Our
results show that machine learning methods fit the data reasonably well but do
not systematically outperform classical methods such as Autoregressive
Integrated Moving Average (ARIMA) models in terms of out-of-sample forecasts.
However, averaging the forecasts from the two types of models provides better
results than either method alone. Compared to the ARIMA and the LSTM, the Root
Mean Squared Error (RMSE) of the average forecast was 0.21 percent and 21.49 percent
lower, respectively, for cotton. For oil, forecast averaging does not provide
improvements in terms of RMSE. We suggest using a forecast averaging method and
extending our analysis to a wide range of commodity prices.
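The forecast-averaging idea from the abstract can be sketched in a few lines: compute each model's out-of-sample forecast, take their equal-weight mean, and compare RMSEs. The numbers below are illustrative placeholders, not the paper's data, and the two "model" forecasts are simply hard-coded arrays standing in for fitted ARIMA and LSTM outputs.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error between actual and forecast values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical out-of-sample values and forecasts (illustrative only).
actual   = np.array([70.1, 71.5, 69.8, 72.0])
arima_fc = np.array([69.5, 71.0, 70.5, 71.2])  # stand-in for ARIMA forecasts
lstm_fc  = np.array([71.0, 72.4, 69.0, 73.1])  # stand-in for LSTM forecasts

# Simple equal-weight forecast averaging.
avg_fc = (arima_fc + lstm_fc) / 2.0

for name, fc in [("ARIMA", arima_fc), ("LSTM", lstm_fc), ("Average", avg_fc)]:
    print(f"{name}: RMSE = {rmse(actual, fc):.3f}")
```

With forecast errors of roughly opposite sign, as here, the averaged forecast's RMSE falls below both individual models, which is the mechanism the paper exploits for cotton.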
Related papers
- F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improved generalization, where the excess risk decreases as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z)
- Few-Shot Load Forecasting Under Data Scarcity in Smart Grids: A Meta-Learning Approach [0.18641315013048293]
This paper proposes adapting an established model-agnostic meta-learning algorithm for short-term load forecasting.
The proposed method can rapidly adapt and generalize within any unknown load time series of arbitrary length.
The proposed model is evaluated using a dataset of historical load consumption data from real-world consumers.
arXiv Detail & Related papers (2024-06-09T18:59:08Z)
- Time Series Stock Price Forecasting Based on Genetic Algorithm (GA)-Long Short-Term Memory Network (LSTM) Optimization [0.0]
A time series algorithm based on Genetic Algorithm (GA) and Long Short-Term Memory Network (LSTM) is used to forecast stock prices effectively.
The results on the test set show that the time series algorithm optimized based on Genetic Algorithm (GA)-Long Short-Term Memory Network (LSTM) is able to accurately predict the stock prices.
arXiv Detail & Related papers (2024-05-06T04:04:27Z)
- A Study on Stock Forecasting Using Deep Learning and Statistical Models [3.437407981636465]
This paper reviews several deep learning algorithms for stock price forecasting. We use a record of S&P 500 index data for training and testing.
It discusses various models, including the Autoregressive Integrated Moving Average (ARIMA) model, the recurrent neural network model, the long short-term memory model, the convolutional neural network model, and the fully convolutional neural network model.
arXiv Detail & Related papers (2024-02-08T16:45:01Z)
- ExtremeCast: Boosting Extreme Value Prediction for Global Weather Forecast [57.6987191099507]
We introduce Exloss, a novel loss function that performs asymmetric optimization and highlights extreme values to obtain accurate extreme weather forecasts.
We also introduce a training-free extreme value enhancement strategy named ExEnsemble, which increases the variance of pixel values and improves the forecast robustness.
Our solution can achieve state-of-the-art performance in extreme weather prediction, while maintaining the overall forecast accuracy comparable to the top medium-range forecast models.
arXiv Detail & Related papers (2024-02-02T10:34:13Z)
- Forecasting inflation using disaggregates and machine learning [0.0]
We consider different disaggregation levels for inflation and employ a range of traditional time series techniques as well as linear and nonlinear machine learning (ML) models to deal with a larger number of predictors.
For many forecast horizons, the aggregation of disaggregated forecasts performs just as well as survey-based expectations and as models that generate forecasts using the aggregate directly.
Our results reinforce the benefits of using models in a data-rich environment for inflation forecasting, including aggregating disaggregated forecasts from ML techniques.
arXiv Detail & Related papers (2023-08-22T04:01:40Z)
- DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [78.6363825307044]
We propose DeepVol, a model based on Dilated Causal Convolutions to forecast day-ahead volatility by using high-frequency data.
We show that the dilated convolutional filters are ideally suited to extract relevant information from intraday financial data.
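The dilated causal convolution behind DeepVol can be illustrated with a minimal NumPy sketch (a generic illustration of the operation, not the authors' implementation): the output at time t depends only on x[t], x[t-d], x[t-2d], ..., so no future information leaks into the forecast.

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation):
    """1-D causal convolution with dilation d:
    out[t] = sum_i kernel[i] * x[t - d*i], with zeros before the series start."""
    x = np.asarray(x, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel)
    pad = dilation * (k - 1)          # left-pad so output never looks ahead
    xp = np.concatenate([np.zeros(pad), x])
    out = np.zeros_like(x)
    for t in range(len(x)):
        for i in range(k):
            out[t] += kernel[i] * xp[pad + t - dilation * i]
    return out

x = np.arange(5.0)                    # toy intraday series: 0, 1, 2, 3, 4
out = dilated_causal_conv1d(x, [1.0, 1.0], dilation=2)
print(out)                            # out[t] = x[t] + x[t-2] -> [0. 1. 2. 4. 6.]
```

Stacking such layers with growing dilation (1, 2, 4, ...) lets the receptive field cover long intraday windows with few parameters, which is what makes the architecture attractive for high-frequency data.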
arXiv Detail & Related papers (2022-09-23T16:13:47Z)
- Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling [58.14878401145309]
We develop a novel approach to producing more sample-efficient estimators of expectations in the PL model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
arXiv Detail & Related papers (2022-05-12T11:15:47Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance guided stochastic gradient descent (IGSGD) method to train models for inference on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Deep Learning Approaches for Forecasting Strawberry Yields and Prices Using Satellite Images and Station-Based Soil Parameters [2.3513645401551333]
We propose here an alternate approach based on deep learning algorithms for forecasting strawberry yields and prices in Santa Barbara county, California.
Building the proposed forecasting model comprises three stages: first, the station-based ensemble model (ATT-CNN-LSTM-SeriesNet_Ens) is built from its compound deep learning components.
Second, the remote sensing ensemble model (SIM_CNN-LSTM_Ens) is trained and tested using satellite images of the same county as input mapped to the same yields and prices as output.
Third, the forecasts of these two models are ensembled to produce the final forecasted value.
arXiv Detail & Related papers (2021-02-17T20:54:34Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regressive baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z)