Extreme-Long-short Term Memory for Time-series Prediction
- URL: http://arxiv.org/abs/2210.08244v1
- Date: Sat, 15 Oct 2022 09:45:48 GMT
- Title: Extreme-Long-short Term Memory for Time-series Prediction
- Authors: Sida Xing, Feihu Han, Suiyang Khoo
- Abstract summary: Long Short-Term Memory (LSTM) is a new type of Recurrent Neural Network (RNN).
In this paper, we propose an advanced LSTM algorithm, the Extreme Long Short-Term Memory (E-LSTM).
The new E-LSTM requires only two epochs to match the results a traditional LSTM reaches at its seventh epoch.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of Long Short-Term Memory (LSTM) solves the problems of
vanishing and exploding gradients in traditional Recurrent Neural Networks
(RNN). LSTM, as a new type of RNN, has been widely used in various fields,
such as text prediction, wind speed forecasting, and depression prediction
from EEG signals. Improving the efficiency of LSTM therefore helps to improve
efficiency across these application areas.
In this paper, we propose an advanced LSTM algorithm, the Extreme Long
Short-Term Memory (E-LSTM), which adds the inverse-matrix part of the Extreme
Learning Machine (ELM) as a new "gate" in the structure of LSTM. This "gate"
preprocesses a portion of the data and feeds the processed data into the cell
update of the LSTM, yielding more accurate results with fewer training rounds
and thus reducing the overall training time.
In this research, the E-LSTM model is used for the text prediction task.
Experimental results showed that the E-LSTM sometimes takes longer to perform a
single training round, but when tested on a small data set, the new E-LSTM
requires only two epochs to match the results a traditional LSTM reaches at its
seventh epoch. The E-LSTM therefore retains the high accuracy of the traditional
LSTM, whilst also improving the training speed and the overall efficiency of the
LSTM.
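The abstract describes the ELM "gate" only at a high level, so the following is a minimal sketch rather than the authors' implementation: it computes an ELM estimate through a fixed random hidden layer plus a Moore-Penrose pseudo-inverse solve, then blends that estimate into an otherwise standard LSTM cell update. The function names (elm_estimate, e_lstm_cell), the equal 0.5/0.5 blend, and the placement inside the candidate update are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_estimate(X, T, W_rand, b_rand):
    """ELM-style preprocessing: project inputs through a fixed random hidden
    layer, then solve for output weights in closed form with the
    Moore-Penrose pseudo-inverse (the "inverse matrix part" of ELM).
    X: (n, d_in) inputs, T: (n, d_out) targets."""
    H = np.tanh(X @ W_rand + b_rand)   # random, untrained hidden layer
    beta = np.linalg.pinv(H) @ T       # closed-form least-squares solve
    return H @ beta                    # ELM's estimate of the targets

def e_lstm_cell(x_t, h_prev, c_prev, params, elm_est):
    """One step of a standard LSTM cell, with the ELM estimate blended into
    the candidate cell update. The blend weights and the exact placement are
    assumptions; the paper only states that the "gate" feeds preprocessed
    data into the cell update."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(Wf @ z + bf)           # forget gate
    i = sigmoid(Wi @ z + bi)           # input gate
    o = sigmoid(Wo @ z + bo)           # output gate
    c_tilde = np.tanh(Wc @ z + bc)     # candidate cell state
    c_t = f * c_prev + i * (0.5 * c_tilde + 0.5 * elm_est)  # assumed blend
    h_t = o * np.tanh(c_t)
    return h_t, c_t

# Toy usage with random data, just to check that the shapes line up.
rng = np.random.default_rng(0)
d_in, d_h, n = 4, 8, 32
X = rng.standard_normal((n, d_in))
T = rng.standard_normal((n, d_h))      # stand-in training targets
est = elm_estimate(X, T, rng.standard_normal((d_in, 16)),
                   rng.standard_normal(16))
params = ([0.1 * rng.standard_normal((d_h, d_h + d_in)) for _ in range(4)]
          + [np.zeros(d_h) for _ in range(4)])
h_t, c_t = e_lstm_cell(X[0], np.zeros(d_h), np.zeros(d_h), params, est[0])
```

The pseudo-inverse step is a one-shot closed-form solve, so whatever accuracy it contributes costs no extra gradient-descent epochs, which is consistent with the reduced epoch count reported in the abstract.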
Related papers
- VecLSTM: Trajectory Data Processing and Management for Activity Recognition through LSTM Vectorization and Database Integration [1.1701842638497677]
VecLSTM is a novel framework that enhances the performance and efficiency of LSTM-based neural networks.
VecLSTM incorporates vectorization layers, leveraging optimized mathematical operations to process input sequences more efficiently.
arXiv Detail & Related papers (2024-09-28T06:22:44Z)
- Unlocking the Power of LSTM for Long Term Time Series Forecasting [27.245021350821638]
We propose a simple yet efficient algorithm named P-sLSTM built upon sLSTM by incorporating patching and channel independence.
These modifications substantially enhance sLSTM's performance in TSF, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-08-19T13:59:26Z)
- Beam Prediction based on Large Language Models [51.45077318268427]
Millimeter-wave (mmWave) communication is promising for next-generation wireless networks but suffers from significant path loss.
Traditional deep learning models, such as long short-term memory (LSTM), enhance beam tracking accuracy but are limited by poor robustness and generalization.
In this letter, we use large language models (LLMs) to improve the robustness of beam prediction.
arXiv Detail & Related papers (2024-08-16T12:40:01Z)
- Kernel Corrector LSTM [1.034961673489652]
We propose a new RW-ML algorithm, Kernel Corrector LSTM (KcLSTM), that replaces the meta-learner of cLSTM with a simpler method: Kernel Smoothing.
We empirically evaluate the forecasting accuracy and the training time of the new algorithm and compare it with cLSTM and LSTM.
arXiv Detail & Related papers (2024-04-28T18:44:10Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Working Memory Connections for LSTM [51.742526187978726]
We show that Working Memory Connections consistently improve the performance of LSTMs on a variety of tasks.
Numerical results suggest that the cell state contains useful information that is worth including in the gate structure.
arXiv Detail & Related papers (2021-08-31T18:01:30Z)
- Automatic Remaining Useful Life Estimation Framework with Embedded Convolutional LSTM as the Backbone [5.927250637620123]
We propose a new LSTM variant called embedded convolutional LSTM (ECLSTM).
In ECLSTM, a group of different 1D convolutions is embedded into the LSTM structure. Through this, the temporal information is preserved between and within windows.
We show the superiority of our proposed ECLSTM approach over state-of-the-art approaches on several widely used benchmark data sets for RUL estimation.
arXiv Detail & Related papers (2020-08-10T08:34:20Z)
- Future Vector Enhanced LSTM Language Model for LVCSR [67.03726018635174]
This paper proposes a novel enhanced long short-term memory (LSTM) LM using the future vector.
Experiments show that the proposed LSTM LM achieves better BLEU scores for long-term sequence prediction.
Rescoring using both the new and conventional LSTM LMs can achieve a very large improvement on the word error rate.
arXiv Detail & Related papers (2020-07-31T08:38:56Z)
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful deep learning alternative.
Dense LSTMs outperform residual and regular LSTMs, and offer a higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
- Sentiment Analysis Using Simplified Long Short-term Memory Recurrent Neural Networks [1.5146765382501612]
We perform sentiment analysis on a GOP Debate Twitter dataset.
To speed up training and reduce the computational cost and time, six different parameter-reduced slim versions of the LSTM model are proposed.
arXiv Detail & Related papers (2020-05-08T12:50:10Z)
- High-Accuracy and Low-Latency Speech Recognition with Two-Head Contextual Layer Trajectory LSTM Model [46.34788932277904]
We improve conventional hybrid LSTM acoustic models for high-accuracy and low-latency automatic speech recognition.
To achieve high accuracy, we use a contextual layer trajectory LSTM (cltLSTM), which decouples the temporal modeling and target classification tasks.
We further improve the training strategy with sequence-level teacher-student learning.
arXiv Detail & Related papers (2020-03-17T00:52:11Z)