Time Series Forecasting Using LSTM Networks: A Symbolic Approach
- URL: http://arxiv.org/abs/2003.05672v1
- Date: Thu, 12 Mar 2020 09:18:22 GMT
- Title: Time Series Forecasting Using LSTM Networks: A Symbolic Approach
- Authors: Steven Elsworth and Stefan Güttel
- Abstract summary: A combination of a recurrent neural network with a dimension-reducing symbolic representation is proposed and applied for the purpose of time series forecasting.
It is shown that the symbolic representation can help to alleviate some of the aforementioned problems and, in addition, might allow for faster training without sacrificing the forecast performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning methods trained on raw numerical time series data exhibit
fundamental limitations such as a high sensitivity to hyperparameters and
even to the initialization of random weights. A combination of a recurrent
neural network with a dimension-reducing symbolic representation is proposed
and applied for the purpose of time series forecasting. It is shown that the
symbolic representation can help to alleviate some of the aforementioned
problems and, in addition, might allow for faster training without sacrificing
the forecast performance.
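To make the idea concrete, here is a minimal sketch of the symbolize, forecast, and invert loop in Python. It is an illustration only: quantile binning stands in for the paper's dimension-reducing symbolic representation, and the toy data, network sizes, and training budget are arbitrary assumptions.
```python
# Minimal sketch: discretize a series into symbols, train an LSTM over the
# symbol sequence, forecast symbols, then map them back to numeric values.
# Quantile binning below is a stand-in for the paper's symbolic representation.
import numpy as np
import torch
import torch.nn as nn

t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t) + 0.1 * np.random.randn(t.size)   # toy data

# 1) Symbolize: quantile bins give a small alphabet of n_symbols letters.
n_symbols = 8
edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
symbols = np.digitize(series, edges)                 # integers in {0,...,7}
centers = np.array([series[symbols == k].mean() for k in range(n_symbols)])

# 2) Train an LSTM to predict the next symbol from a window of symbols.
window = 32
X = torch.from_numpy(np.stack([symbols[i:i + window]
                               for i in range(len(symbols) - window)]))
y = torch.from_numpy(symbols[window:])

class SymbolLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, 16)
        self.lstm = nn.LSTM(16, 64, batch_first=True)
        self.head = nn.Linear(64, n_symbols)
    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h[:, -1])                   # logits for the next symbol

model = SymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                               # tiny budget, sketch only
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

# 3) Forecast symbols autoregressively and invert the symbolization.
ctx = list(symbols[-window:])
for _ in range(50):
    ctx.append(int(model(torch.tensor([ctx[-window:]])).argmax()))
forecast = centers[ctx[window:]]                     # numeric forecast values
```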
Related papers
- Quantized symbolic time series approximation [0.28675177318965045]
We present a new quantization-based ABBA symbolic approximation technique, QABBA.
QABBA exhibits improved storage efficiency while retaining the original speed and accuracy of symbolic reconstruction.
An application of QABBA with large language models (LLMs) for time series regression is also presented.
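As a hedged illustration of where the storage saving comes from, the sketch below uniformly quantizes a set of real-valued representation parameters to 8-bit codes; the scheme and names are illustrative assumptions, not QABBA's actual quantizer.
```python
# Illustrative uniform quantization of real-valued symbolic-representation
# parameters to b-bit integer codes (an assumption, not QABBA's exact scheme).
import numpy as np

def quantize(values, bits=8):
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / (2**bits - 1) or 1.0         # guard against hi == lo
    codes = np.round((values - lo) / scale).astype(np.uint16)
    return codes, lo, scale                          # codes + two floats to store

def dequantize(codes, lo, scale):
    return lo + codes.astype(np.float64) * scale

params = np.random.randn(1000)                       # toy segment parameters
codes, lo, scale = quantize(params)
recovered = dequantize(codes, lo, scale)
assert np.abs(recovered - params).max() <= scale / 2 + 1e-12
```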
arXiv Detail & Related papers (2024-11-20T10:32:22Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
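A heavily simplified sketch of the Siamese pairing idea follows: draw a past and a current subseries from one unlabeled series and encode both with a single shared encoder. TimeSiam's masked reconstruction and lineage embeddings are omitted, so this shows only the sampling and weight-sharing skeleton.
```python
# Simplified Siamese pre-training skeleton: past/current subseries pairs from
# the same series pass through one shared encoder. (A real setup needs a
# stop-gradient or reconstruction target to avoid representation collapse.)
import torch
import torch.nn as nn

series = torch.randn(4096)                           # one long unlabeled series
win, max_gap = 64, 256

def sample_pair():
    i = torch.randint(0, len(series) - win - max_gap, (1,)).item()
    j = i + torch.randint(1, max_gap, (1,)).item()   # current starts after past
    return series[i:i + win], series[j:j + win]

encoder = nn.Sequential(nn.Linear(win, 128), nn.ReLU(), nn.Linear(128, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(100):
    past, cur = sample_pair()
    z_past, z_cur = encoder(past), encoder(cur)      # same (Siamese) weights
    loss = 1 - nn.functional.cosine_similarity(z_past, z_cur, dim=0)
    opt.zero_grad(); loss.backward(); opt.step()
```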
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
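As a minimal sketch of the INR idea for a single series (HyperTime's hypernetwork over whole datasets is omitted): a small MLP with sine activations is fit to map a normalized timestamp to the series value, giving a resolution-independent encoding. All sizes and the toy data are assumptions.
```python
# Fit an implicit neural representation t -> x(t) with sine activations
# (SIREN-style); the hypernetwork over whole datasets is omitted.
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)                   # SIREN-style frequency scale

inr = nn.Sequential(nn.Linear(1, 64), Sine(),
                    nn.Linear(64, 64), Sine(),
                    nn.Linear(64, 1))

t = torch.linspace(0, 1, 512).unsqueeze(-1)          # normalized timestamps
x = torch.sin(12 * t) + 0.3 * torch.sin(25 * t)      # toy series to encode

opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
for step in range(2000):
    loss = nn.functional.mse_loss(inr(t), x)
    opt.zero_grad(); loss.backward(); opt.step()

x_fine = inr(torch.linspace(0, 1, 4096).unsqueeze(-1))  # query at any resolution
```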
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Learning Wave Propagation with Attention-Based Convolutional Recurrent Autoencoder Net [0.0]
We present an end-to-end attention-based convolutional recurrent autoencoder (AB-CRAN) network for data-driven modeling of wave propagation phenomena.
We train a denoising convolutional autoencoder on the full-order snapshots given by time-dependent hyperbolic partial differential equations for wave propagation.
The attention-based sequence-to-sequence network increases the time-horizon of prediction by five times compared to the plain RNN-LSTM.
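A hedged sketch of the reduced-order-modeling half of this pipeline: a 1D denoising convolutional autoencoder compresses full-order snapshots to a low-dimensional code. The attention-based sequence-to-sequence propagator over those codes is omitted, and all shapes are illustrative.
```python
# Denoising 1D convolutional autoencoder over wave snapshots (sketch); the
# attention-based seq2seq model that advances the latent codes is omitted.
import torch
import torch.nn as nn

enc = nn.Sequential(
    nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
)
dec = nn.Sequential(
    nn.ConvTranspose1d(32, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
    nn.ConvTranspose1d(16, 1, 5, stride=2, padding=2, output_padding=1),
)

snapshots = torch.randn(8, 1, 256)                   # toy full-order snapshots
noisy = snapshots + 0.1 * torch.randn_like(snapshots)

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for step in range(200):
    loss = nn.functional.mse_loss(dec(enc(noisy)), snapshots)  # denoise objective
    opt.zero_grad(); loss.backward(); opt.step()

codes = enc(snapshots)                               # low-dimensional trajectory
```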
arXiv Detail & Related papers (2022-01-17T20:51:59Z)
- Probabilistic Time Series Forecasting with Implicit Quantile Networks [0.7249731529275341]
We combine an autoregressive recurrent neural network, which models the temporal dynamics, with Implicit Quantile Networks to learn a large class of distributions over a time-series target.
Our approach is favorable in terms of point-wise prediction accuracy as well as in estimating the underlying temporal distribution.
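The core training signal can be sketched with the pinball (quantile) loss: the network receives a random quantile level tau as an extra input and thereby learns the whole predictive distribution. This is a simplified sketch; actual Implicit Quantile Networks embed tau with cosine features, and the RNN producing the hidden states is stubbed out here.
```python
# Pinball-loss sketch of implicit quantile learning: a head maps (state, tau)
# to the tau-quantile of the target. RNN states are stubbed with random
# tensors; real IQNs embed tau with cosine features.
import torch
import torch.nn as nn

hidden_dim = 32
head = nn.Sequential(nn.Linear(hidden_dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

def pinball_loss(y, y_hat, tau):
    err = y - y_hat
    return torch.mean(torch.maximum(tau * err, (tau - 1) * err))

h = torch.randn(128, hidden_dim)                     # stand-in for RNN states
y = torch.randn(128, 1)                              # next-step targets

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for step in range(200):
    tau = torch.rand(128, 1)                         # random quantile levels
    y_hat = head(torch.cat([h, tau], dim=-1))
    loss = pinball_loss(y, y_hat, tau)
    opt.zero_grad(); loss.backward(); opt.step()
# At inference, sweeping tau over (0, 1) traces out the predictive distribution.
```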
arXiv Detail & Related papers (2021-07-08T10:37:24Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Randomized Neural Networks for Forecasting Time Series with Multiple Seasonality [0.0]
This work contributes to the development of neural forecasting models with novel randomization-based learning methods.
A pattern-based representation of time series makes the proposed approach useful for forecasting time series with multiple seasonality.
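A generic member of this model family (an assumption; the paper's exact randomized architecture may differ) can be sketched in a few lines: hidden weights are drawn at random and frozen, so training reduces to a single regularized least-squares solve for the readout.
```python
# Randomization-based learning sketch: random frozen hidden layer, closed-form
# ridge readout. Toy series with two seasonal periods (24 and 168 steps).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
series = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / 168)

window = 48
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

n_hidden = 300
W = rng.normal(size=(window, n_hidden))              # random, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

lam = 1e-2                                           # ridge regularization
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = np.tanh(series[-window:] @ W + b) @ beta      # one-step-ahead forecast
```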
arXiv Detail & Related papers (2021-07-04T18:39:27Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and better gradient propagation during the fine-tuning phase.
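A loose sketch of the "autoencoder first, recurrent network second" recipe follows, under the strong simplifying assumption that PCA (the optimal linear autoencoder) on flattened windows can stand in for the paper's sequential linear autoencoder construction.
```python
# Loose sketch: pretrain a linear autoencoder (here: PCA on flattened windows,
# a simplifying assumption) and copy its projection into an RNN's input
# weights before fine-tuning. The paper's construction is more elaborate.
import torch
import torch.nn as nn

data = torch.randn(512, 20, 8)                       # (batch, time, features)
flat = data.reshape(512, -1)

hidden = 32
_, _, V = torch.pca_lowrank(flat, q=hidden)          # principal directions

rnn = nn.RNN(input_size=8, hidden_size=hidden, batch_first=True)
with torch.no_grad():
    # Initialize input-to-hidden weights from the directions that act on the
    # most recent time step's feature block (an illustrative choice).
    rnn.weight_ih_l0.copy_(V[-8:, :].T)              # shape (hidden, features)
# ... then fine-tune `rnn` on the downstream task as usual.
```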
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Predicting Training Time Without Training [120.92623395389255]
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
We leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model.
We are able to predict the time it takes to fine-tune a model to a given loss without having to perform any training.
arXiv Detail & Related papers (2020-08-28T04:29:54Z)
- A New Unified Deep Learning Approach with Decomposition-Reconstruction-Ensemble Framework for Time Series Forecasting [15.871046608998995]
A new variational mode decomposition (VMD) based deep learning approach is proposed in this paper.
A CNN is applied to learn the reconstruction patterns on the decomposed sub-signals, yielding several reconstructed sub-signals.
A long short-term memory (LSTM) network is then employed to forecast the time series, taking the decomposed and reconstructed sub-signals as inputs.
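The pipeline shape can be sketched as follows, with two loudly flagged simplifications: a moving-average trend/residual split stands in for VMD, and the CNN reconstruction stage is dropped, so the sub-signals feed the LSTM directly as parallel input channels.
```python
# Decomposition-then-forecast sketch: split the series into sub-signals (a
# moving-average split stands in for VMD) and forecast with an LSTM that sees
# the sub-signals as parallel input channels; the CNN stage is omitted.
import torch
import torch.nn as nn

series = torch.sin(torch.linspace(0, 60, 1500)) + 0.2 * torch.randn(1500)

kernel = torch.ones(1, 1, 25) / 25                   # 25-step moving average
trend = nn.functional.conv1d(series.view(1, 1, -1), kernel, padding=12).view(-1)
modes = torch.stack([trend, series - trend], dim=-1) # (time, n_modes)

window = 48
X = torch.stack([modes[i:i + window] for i in range(len(series) - window)])
y = series[window:]

lstm = nn.LSTM(input_size=2, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
opt = torch.optim.Adam([*lstm.parameters(), *head.parameters()], lr=1e-3)
for epoch in range(5):
    out, _ = lstm(X)
    loss = nn.functional.mse_loss(head(out[:, -1]).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()
```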
arXiv Detail & Related papers (2020-02-22T12:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.