An analysis of deep neural networks for predicting trends in time series data
- URL: http://arxiv.org/abs/2009.07943v2
- Date: Tue, 22 Sep 2020 20:44:33 GMT
- Title: An analysis of deep neural networks for predicting trends in time series data
- Authors: Kouame Hermann Kouassi and Deshendran Moodley
- Abstract summary: TreNet is a hybrid Deep Neural Network (DNN) algorithm for predicting trends in time series data.
We replicated the TreNet experiments on the same data sets using a walk-forward validation method and tested our optimal model over multiple independent runs to evaluate model stability.
We found that in general TreNet still performs better than the vanilla DNN models, but not on all data sets as reported in the original TreNet study.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, a hybrid Deep Neural Network (DNN) algorithm, TreNet, was proposed
for predicting trends in time series data. While TreNet was shown to outperform other
DNN and traditional ML approaches for trend prediction, the validation method used did
not take into account the sequential nature of time series data sets and did not deal
with model updates. In this research we replicated the TreNet experiments on the same
data sets using a walk-forward validation method and tested our optimal model over
multiple independent runs to evaluate model stability. We compared the performance of
the hybrid TreNet algorithm on four data sets to vanilla DNN algorithms that take in
point data, and to traditional ML algorithms. We found that in general TreNet still
performs better than the vanilla DNN models, but not on all data sets as reported in
the original TreNet study. This study highlights the importance of using an appropriate
validation method and evaluating model stability when developing machine learning
models for trend prediction in time series data.
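The walk-forward scheme used in this replication is simple to sketch: each fold trains on all data up to a cut-off point and tests on the block that immediately follows, preserving temporal order, and the whole procedure is repeated over independent runs to measure stability. The code below is a minimal illustration, not the authors' published code; the scikit-learn-style fit/score interface and all parameter values are assumptions.

```python
import numpy as np

def walk_forward_splits(n_samples, n_folds=5, min_train=100):
    """Yield (train, test) index pairs that respect temporal order:
    train on everything before the cut-off, test on the block after it."""
    test_size = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        cut = min_train + k * test_size
        yield np.arange(cut), np.arange(cut, min(cut + test_size, n_samples))

def evaluate_stability(make_model, X, y, n_runs=10, n_folds=5):
    """Retrain from scratch over several independent runs and report
    the mean and spread of the walk-forward scores."""
    run_scores = []
    for seed in range(n_runs):
        rng = np.random.default_rng(seed)  # fresh initialisation per run
        fold_scores = [
            make_model(rng).fit(X[tr], y[tr]).score(X[te], y[te])
            for tr, te in walk_forward_splits(len(X), n_folds)
        ]
        run_scores.append(np.mean(fold_scores))
    return np.mean(run_scores), np.std(run_scores)  # std captures model stability
```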
Related papers
- Inferring Data Preconditions from Deep Learning Models for Trustworthy Prediction in Deployment [25.527665632625627]
It is important to reason about the trustworthiness of the model's predictions with unseen data during deployment.
Existing methods for specifying and verifying traditional software are insufficient for this task.
We propose a novel technique that uses rules derived from neural network computations to infer data preconditions.
arXiv Detail & Related papers (2024-01-26T03:47:18Z)
- SHAPNN: Shapley Value Regularized Tabular Neural Network [4.587122314291091]
We present SHAPNN, a novel deep data modeling architecture designed for supervised learning.
Our neural network is trained with standard backpropagation and regularized with real-time estimated Shapley values.
We evaluate our method on various publicly available datasets and compare it with state-of-the-art deep neural network models.
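The abstract does not spell out how the Shapley values are estimated during training. One standard estimator that can supply such estimates on the fly is Monte-Carlo permutation sampling; the sketch below shows that generic estimator, not the paper's exact procedure, and `predict` and `baseline` are illustrative placeholders.

```python
import numpy as np

def shapley_sample(predict, x, baseline, n_perm=50, rng=None):
    """Monte-Carlo Shapley estimate for one input: average each feature's
    marginal contribution over random feature orderings."""
    rng = rng or np.random.default_rng()
    phi = np.zeros(len(x))
    for _ in range(n_perm):
        z, prev = baseline.copy(), predict(baseline)
        for j in rng.permutation(len(x)):
            z[j] = x[j]               # switch feature j from baseline to x
            cur = predict(z)
            phi[j] += cur - prev      # marginal contribution of feature j
            prev = cur
    return phi / n_perm
```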
arXiv Detail & Related papers (2023-09-15T22:45:05Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
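The abstract gives only the high-level idea; a bare-bones online neuro-evolution loop re-scores the population on each newly arrived data window, keeps the best genomes, and refills the rest with mutated copies. The sketch below is a generic illustration of that concept, not the ONE-NAS algorithm itself; `fitness`, `mutate`, and the genome representation are user-supplied assumptions.

```python
import numpy as np

def online_evolve(fitness, mutate, init_genomes, n_steps=100, seed=0):
    """Generic online evolutionary search: at each time step, evaluate the
    population on the latest data (higher fitness is better), keep the top
    half, and refill the population by mutation."""
    rng = np.random.default_rng(seed)
    pop = list(init_genomes)
    for step in range(n_steps):
        scores = [fitness(g, step) for g in pop]           # score on newest window
        elite = [pop[i] for i in np.argsort(scores)[len(pop) // 2:]]
        pop = elite + [mutate(elite[rng.integers(len(elite))], rng)
                       for _ in range(len(pop) - len(elite))]
    return max(pop, key=lambda g: fitness(g, n_steps - 1))
```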
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical EDNN has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently in the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
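The additive formulation can be sketched in a few lines of PyTorch: each exit head emits a correction that is added to a frozen (detached) copy of the earlier heads' combined prediction, in the spirit of gradient boosting, and every batch trains all exits jointly. This is a toy reconstruction from the abstract, with dense blocks standing in for a real backbone and all sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class BoostedEDNN(nn.Module):
    """Early-exit network trained as an additive (boosted) model."""
    def __init__(self, dim=64, n_classes=10, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks))
        self.heads = nn.ModuleList(
            nn.Linear(dim, n_classes) for _ in range(n_blocks))

    def forward(self, x):
        outputs, running, h = [], None, x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            delta = head(h)  # this exit's boosting-style correction
            running = delta if running is None else running.detach() + delta
            outputs.append(running)
        return outputs       # one progressively refined prediction per exit

model = BoostedEDNN()
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
loss = sum(nn.functional.cross_entropy(out, y) for out in model(x))
loss.backward()  # all exits are optimized on every batch
```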
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
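For context, the roofline baseline that the stacked models are compared against estimates a layer's latency as the slower of its compute time and its memory-transfer time. The sketch below uses placeholder hardware numbers; in practice the peak throughput and bandwidth would come from the micro-kernel benchmarks.

```python
def roofline_latency_ms(flops, bytes_moved, peak_flops=1e12, bandwidth_bps=5e10):
    """Roofline estimate: a layer is bound by compute or by memory traffic,
    whichever takes longer. Hardware numbers are illustrative placeholders."""
    return 1e3 * max(flops / peak_flops, bytes_moved / bandwidth_bps)

# e.g. a 3x3 convolution, 64 -> 64 channels on a 56x56 feature map
flops = 2 * 3 * 3 * 64 * 64 * 56 * 56                   # multiply-accumulates
bytes_moved = 4 * (64 * 56 * 56 * 2 + 3 * 3 * 64 * 64)  # fp32 activations + weights
print(f"{roofline_latency_ms(flops, bytes_moved):.3f} ms")
```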
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Evaluation of deep learning models for multi-step ahead time series prediction [1.3764085113103222]
We present an evaluation study that compares the performance of deep learning models for multi-step ahead time series prediction.
Our deep learning methods comprise simple recurrent neural networks, long short-term memory (LSTM) networks, bidirectional LSTMs, encoder-decoder LSTM networks, and convolutional neural networks.
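Whatever the architecture, multi-step-ahead training first reshapes the series into fixed-length input windows paired with multi-step targets (the direct strategy). A minimal windowing sketch, with arbitrary window sizes:

```python
import numpy as np

def make_multistep_windows(series, n_in=10, n_out=5):
    """Turn a univariate series into (input window, multi-step target) pairs."""
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])                 # past n_in observations
        Y.append(series[i + n_in:i + n_in + n_out])  # next n_out observations
    return np.asarray(X), np.asarray(Y)

X, Y = make_multistep_windows(np.sin(np.arange(300) / 10.0))
print(X.shape, Y.shape)  # (286, 10) (286, 5)
```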
arXiv Detail & Related papers (2021-03-26T04:07:11Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We reveal that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
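The abstract does not name the specific filter; l1 trend filtering (Kim et al., 2009), which extracts a piecewise-linear trend by penalizing second differences, is a standard choice for this kind of temporal preprocessing. A sketch using cvxpy, with an arbitrary regularization weight:

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam=10.0):
    """Fit a piecewise-linear trend: least-squares data term plus an
    l1 penalty on second differences (knots appear where it is nonzero)."""
    x = cp.Variable(len(y))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x)
                           + lam * cp.norm(cp.diff(x, 2), 1))).solve()
    return x.value

t = np.arange(200)
y = np.sin(t / 20) + 0.1 * np.random.default_rng(0).standard_normal(200)
trend = l1_trend_filter(y)  # feed the trend (or detrended residual) to the network
```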
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
- Automatic deep learning for trend prediction in time series data [0.0]
Deep Neural Network (DNN) algorithms have been explored for predicting trends in time series data.
In many real-world applications, time series data are captured from dynamic systems.
We show how a recent AutoML tool can be effectively used to automate the model development process.
arXiv Detail & Related papers (2020-09-17T19:47:05Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
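The state update usually associated with liquid time-constant cells can be written as a fused semi-implicit Euler step in which a nonlinear gate f(x, I) modulates both the effective time constant and the attractor A, which keeps the state bounded. The dense sigmoid gate and all shapes below are illustrative simplifications, not the paper's exact parameterization.

```python
import numpy as np

def ltc_step(x, I, W, b, A, tau, dt=0.01):
    """One fused semi-implicit Euler step of a liquid time-constant cell."""
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, I]) + b)))  # nonlinear gate
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))       # bounded update

# tiny usage example: 4 hidden units driven by 2 inputs
rng = np.random.default_rng(0)
x, W, b = np.zeros(4), rng.standard_normal((4, 6)), np.zeros(4)
A, tau = rng.standard_normal(4), np.ones(4)
for t in range(100):
    x = ltc_step(x, np.array([np.sin(0.1 * t), 1.0]), W, b, A, tau)
```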
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.