DMIDAS: Deep Mixed Data Sampling Regression for Long Multi-Horizon Time
Series Forecasting
- URL: http://arxiv.org/abs/2106.05860v1
- Date: Mon, 7 Jun 2021 22:36:38 GMT
- Title: DMIDAS: Deep Mixed Data Sampling Regression for Long Multi-Horizon Time
Series Forecasting
- Authors: Cristian Challu, Kin G. Olivares, Gus Welter, Artur Dubrawski
- Abstract summary: We develop DMIDAS, a method for long multi-horizon forecasting, validated on high-frequency healthcare and electricity price data.
We improve prediction accuracy by 5% over state-of-the-art models while reducing the number of parameters of NBEATS by nearly 70%.
- Score: 13.458489651961106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural forecasting has shown significant improvements in the accuracy of
large-scale systems, yet predicting extremely long horizons remains a
challenging task. Two common problems are the volatility of the predictions and
their computational complexity; we address them by incorporating smoothness
regularization and mixed data sampling techniques into a well-performing
multi-layer perceptron based architecture (NBEATS). We validate our proposed
method, DMIDAS, on high-frequency healthcare and electricity price data with
long forecasting horizons (~1000 timestamps) where we improve the prediction
accuracy by 5% over state-of-the-art models, reducing the number of parameters
of NBEATS by nearly 70%.
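The abstract gives no implementation details, but a minimal, hedged sketch of the two ideas it names (MIDAS-style mixed data sampling of the input window and a smoothness penalty on the multi-horizon forecast) is shown below. The sampling rates, function names, and loss weight are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: strides, names, and weights are assumptions,
# not details taken from the DMIDAS paper.
import torch
import torch.nn.functional as F

MIDAS_STRIDES = (1, 4, 24)  # hypothetical sampling rates for the input window


def mixed_rate_features(history: torch.Tensor) -> torch.Tensor:
    """MIDAS-style mixed data sampling: view the same input window at several
    rates and concatenate, shrinking the MLP input versus the raw window.
    history: (batch, window_length)."""
    parts = []
    for stride in MIDAS_STRIDES:
        pooled = F.avg_pool1d(history.unsqueeze(1), kernel_size=stride,
                              stride=stride).squeeze(1)
        parts.append(pooled)
    return torch.cat(parts, dim=-1)


def smoothness_penalty(forecast: torch.Tensor, weight: float = 1e-3) -> torch.Tensor:
    """Penalize large second differences of the multi-horizon forecast,
    discouraging volatile long-horizon predictions."""
    first_diff = forecast[..., 1:] - forecast[..., :-1]
    second_diff = first_diff[..., 1:] - first_diff[..., :-1]
    return weight * second_diff.pow(2).mean()


if __name__ == "__main__":
    x = torch.randn(8, 96)        # toy input window (batch, window_length)
    yhat = torch.randn(8, 1000)   # stand-in for an NBEATS-like MLP forecast
    print(mixed_rate_features(x).shape)   # torch.Size([8, 124])
    print(smoothness_penalty(yhat))       # scalar regularization term
```

In training, the penalty would simply be added to the forecasting loss of the MLP; per the abstract, combining these two ideas is what cuts the NBEATS parameter count by nearly 70% at ~1000-step horizons.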
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigate whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - Long-term drought prediction using deep neural networks based on geospatial weather data [75.38539438000072]
High-quality drought forecasting up to a year in advance is critical for agriculture planning and insurance.
We tackle drought prediction by introducing a systematic end-to-end approach based on deep neural networks and geospatial weather data.
A key finding is the exceptional performance of a Transformer model, EarthFormer, in making accurate short-term (up to six months) forecasts.
arXiv Detail & Related papers (2023-09-12T13:28:06Z) - Learning Sample Difficulty from Pre-trained Models for Reliable
Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - cs-net: structural approach to time-series forecasting for
high-dimensional feature space data with limited observations [1.5533753199073637]
We propose a flexible data feature extraction technique that excels in high-dimensional multivariate forecasting tasks.
Our approach was originally developed for the National Science Foundation (NSF) Algorithms for Threat Detection (ATD) 2022 Challenge.
Our models trained on the GDELT dataset finished in 1st and 2nd place in the ATD sprint series and hold promise for other time series forecasting datasets.
arXiv Detail & Related papers (2022-12-05T19:46:47Z) - Beyond S-curves: Recurrent Neural Networks for Technology Forecasting [60.82125150951035]
We develop an autoencoder approach that employs recent advances in machine learning and time series forecasting.
S-curve forecasts largely exhibit a mean absolute percentage error (MAPE) comparable to a simple ARIMA baseline.
Our autoencoder approach improves the MAPE by 13.5% on average over the second-best result.
arXiv Detail & Related papers (2022-11-28T14:16:22Z) - DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility (a minimal sketch of this building block appears after this list).
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
arXiv Detail & Related papers (2022-09-23T16:13:47Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range
Forecasting [6.295157260756792]
We introduce Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting [17.53378788483556]
Two common challenges afflicting long-horizon forecasting are the volatility of the predictions and their computational complexity.
We introduce N-HiTS, a model which addresses both challenges by incorporating novel hierarchical and multi-rate data sampling techniques.
We conduct an empirical evaluation demonstrating the advantages of N-HiTS over the state-of-the-art long-horizon forecasting methods.
arXiv Detail & Related papers (2022-01-30T17:52:19Z) - A Statistics and Deep Learning Hybrid Method for Multivariate Time
Series Forecasting and Mortality Modeling [0.0]
Exponential Smoothing Recurrent Neural Network (ES-RNN) is a hybrid between a statistical forecasting model and a recurrent neural network variant.
ES-RNN achieves a 9.4% improvement in absolute error in the Makridakis-4 Forecasting Competition.
arXiv Detail & Related papers (2021-12-16T04:44:19Z) - Low-Rank Temporal Attention-Augmented Bilinear Network for financial
time-series forecasting [93.73198973454944]
Deep learning models have led to significant performance improvements in many problems coming from different domains, including prediction problems of financial time-series data.
The Temporal Attention-Augmented Bilinear network was recently proposed as an efficient and high-performing model for Limit Order Book time-series forecasting.
In this paper, we propose a low-rank tensor approximation of the model to further reduce the number of trainable parameters and increase its speed.
arXiv Detail & Related papers (2021-07-05T10:15:23Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example in which THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
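For reference, the dilated causal convolution named in the DeepVol entry above can be sketched in a few lines. This is a generic, hedged illustration of that building block, not DeepVol itself; channel counts, kernel size, and the dilation schedule are assumptions.

```python
# Generic dilated causal convolution stack; an illustration of the building
# block named in the DeepVol entry, not that paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedCausalConv1d(nn.Module):
    """1-D convolution that only sees past values (causal), with a dilation
    factor that widens the receptive field exponentially across layers."""

    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation  # pad the past side only
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); left padding keeps the output at step t
        # dependent only on inputs up to step t.
        return torch.relu(self.conv(F.pad(x, (self.left_pad, 0))))


if __name__ == "__main__":
    net = nn.Sequential(*[DilatedCausalConv1d(8, kernel_size=2, dilation=2 ** i)
                          for i in range(4)])
    intraday = torch.randn(1, 8, 256)   # toy high-frequency series
    print(net(intraday).shape)          # torch.Size([1, 8, 256])
```

Stacking layers with dilations 1, 2, 4, 8 lets a short kernel cover a long intraday history, which is why this block suits volatility forecasting from high-frequency data.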
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.