Automatic Remaining Useful Life Estimation Framework with Embedded
Convolutional LSTM as the Backbone
- URL: http://arxiv.org/abs/2008.03961v1
- Date: Mon, 10 Aug 2020 08:34:20 GMT
- Title: Automatic Remaining Useful Life Estimation Framework with Embedded
Convolutional LSTM as the Backbone
- Authors: Yexu Zhou, Yuting Gao, Yiran Huang, Michael Hefenbrock, Till Riedel,
and Michael Beigl
- Abstract summary: We propose a new LSTM variant called embedded convolutional LSTM (ECLSTM).
In ECLSTM, a group of different 1D convolutions is embedded into the LSTM structure. Through this, the temporal information is preserved between and within windows.
We show the superiority of our proposed ECLSTM approach over state-of-the-art approaches on several widely used benchmark data sets for RUL estimation.
- Score: 5.927250637620123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An essential task in predictive maintenance is the prediction of the
Remaining Useful Life (RUL) through the analysis of multivariate time series.
Using the sliding window method (illustrated in the first sketch after the
abstract), Convolutional Neural Network (CNN) and conventional Recurrent Neural
Network (RNN) approaches have produced impressive results on this task, due to
their ability to learn optimized features.
However, sequence information is only partially modeled by CNN approaches. Due
to the flatten mechanism in conventional RNNs, like Long Short Term Memories
(LSTM), the temporal information within the window is not fully preserved. To
exploit the multi-level temporal information, many approaches have been proposed
which combine CNN and RNN models. In this work, we propose a new LSTM variant
called embedded convolutional LSTM (ECLSTM). In ECLSTM, a group of different 1D
convolutions is embedded into the LSTM structure. Through this, the temporal
information is preserved between and within windows (a generic sketch of such a
cell follows the abstract). Since the hyper-parameters of models require careful
tuning, we also propose an automated prediction framework based on Bayesian
optimization with the Hyperband optimizer (sketched below), which allows for
efficient optimization of the network architecture. Finally, we show
the superiority of our proposed ECLSTM approach over the state-of-the-art
approaches on several widely used benchmark data sets for RUL Estimation.
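
To make the sliding window method concrete: a run-to-failure sensor trace is sliced into fixed-length overlapping windows, each labeled with the RUL at its final time step. A minimal Python/NumPy sketch; the window length, stride, and the linearly decreasing RUL labels are illustrative choices, not taken from the paper.

```python
import numpy as np

def sliding_windows(series: np.ndarray, rul: np.ndarray, window: int, stride: int = 1):
    """Slice a multivariate run-to-failure series of shape (T, F) into
    overlapping windows of shape (window, F); each window is labeled
    with the RUL at its last time step."""
    X, y = [], []
    for start in range(0, len(series) - window + 1, stride):
        end = start + window
        X.append(series[start:end])
        y.append(rul[end - 1])
    return np.stack(X), np.array(y)

# Example: one engine trace with 200 cycles and 14 sensors,
# labeled with a linearly decreasing RUL (an illustrative labeling).
T, F = 200, 14
trace = np.random.randn(T, F)
rul = np.arange(T - 1, -1, -1, dtype=float)  # RUL counts down to failure
X, y = sliding_windows(trace, rul, window=30)
print(X.shape, y.shape)  # (171, 30, 14) (171,)
```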
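
The core idea of ECLSTM is to replace the dense (flattening) transforms inside the LSTM gates with 1D convolutions, so the temporal structure within each window survives the recurrence. The sketch below shows this general pattern with a single shared kernel size; the paper embeds a group of different 1D convolutions, and the exact cell equations should be taken from the paper itself.

```python
import torch
import torch.nn as nn

class Conv1dLSTMCell(nn.Module):
    """LSTM cell whose gate transforms are 1D convolutions, so the
    temporal structure inside each input window is preserved.
    A generic sketch in the spirit of ECLSTM, not the authors' exact cell."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # One convolution producing all four gates at once, as in ConvLSTM.
        self.gates = nn.Conv1d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        # x: (B, in_ch, W) -- one sliding window per recurrent step
        h, c = state  # each (B, hid_ch, W)
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Usage: iterate over a sequence of windows.
B, W, steps = 8, 30, 5
cell = Conv1dLSTMCell(in_ch=14, hid_ch=32)
h = torch.zeros(B, 32, W); c = torch.zeros(B, 32, W)
for t in range(steps):
    x_t = torch.randn(B, 14, W)
    h, c = cell(x_t, (h, c))
print(h.shape)  # torch.Size([8, 32, 30])
```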
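
The automated framework tunes hyper-parameters with Bayesian optimization combined with Hyperband (BOHB). The successive-halving core of Hyperband can be sketched as follows; the search space and the placeholder objective are illustrative only, and BOHB would additionally replace the random sampler with a model-based one once enough observations accumulate.

```python
import math
import random

def sample_config():
    # Illustrative search space; the paper tunes the network architecture.
    return {
        "hidden_channels": random.choice([16, 32, 64, 128]),
        "kernel_size": random.choice([3, 5, 7]),
        "lr": 10 ** random.uniform(-4, -2),
    }

def train_and_score(cfg, budget_epochs):
    # Placeholder: train a model for `budget_epochs` and return a
    # validation loss. Replace with a real training loop.
    return random.random() / math.log2(budget_epochs + 2)

def hyperband(max_budget=81, eta=3):
    """Successive-halving core of Hyperband: start many cheap trials and
    keep the best 1/eta at each rung with eta-times more budget."""
    s_max = int(math.log(max_budget, eta))
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):
        n = int(math.ceil((s_max + 1) / (s + 1) * eta ** s))
        budget = max_budget * eta ** (-s)
        configs = [sample_config() for _ in range(n)]
        for rung in range(s + 1):
            scores = [(train_and_score(c, int(budget * eta ** rung)), c)
                      for c in configs]
            scores.sort(key=lambda t: t[0])
            best = min(best, scores[0], key=lambda t: t[0])
            configs = [c for _, c in scores[: max(1, len(scores) // eta)]]
    return best

print(hyperband())
```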
Related papers
- Disentangling Structured Components: Towards Adaptive, Interpretable and
Scalable Time Series Forecasting [52.47493322446537]
We develop an adaptive, interpretable and scalable forecasting framework, which seeks to individually model each component of the spatial-temporal patterns.
SCNN works with a pre-defined generative process of multivariate time series (MTS), which arithmetically characterizes the latent structure of the spatial-temporal patterns.
Extensive experiments are conducted to demonstrate that SCNN can achieve superior performance over state-of-the-art models on three real-world datasets.
arXiv Detail & Related papers (2023-05-22T13:39:44Z)
- Towards Energy-Efficient, Low-Latency and Accurate Spiking LSTMs [1.7969777786551424]
Spiking Neural Networks (SNNs) have emerged as an attractive spatio-temporal computing paradigm for complex vision tasks.
We propose an optimized spiking long short-term memory network (LSTM) training framework that involves a novel ANN-to-SNN conversion framework, followed by SNN training.
We evaluate our framework on sequential learning tasks including the temporal MNIST, Google Speech Commands (GSC), and UCI Smartphone datasets on different LSTM architectures.
arXiv Detail & Related papers (2022-10-23T04:10:27Z)
- Image Classification using Sequence of Pixels [3.04585143845864]
This study compares sequential image classification methods based on recurrent neural networks.
We describe methods based on Long Short-Term Memory (LSTM) and bidirectional Long Short-Term Memory (BiLSTM) architectures, among others (a generic sketch follows below).
arXiv Detail & Related papers (2022-09-23T09:42:44Z)
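
A generic sketch of the sequence-of-pixels idea (not the paper's exact models): feed the image to an LSTM one row at a time and classify from the final hidden state; swapping in a bidirectional LSTM gives the BiLSTM variant.

```python
import torch
import torch.nn as nn

class PixelSequenceClassifier(nn.Module):
    """Treat each 28x28 image as a sequence of 28 rows (28 features each)
    and classify from the LSTM's final hidden state. Use
    nn.LSTM(..., bidirectional=True) for the BiLSTM variant."""
    def __init__(self, row_len=28, hidden=128, classes=10):
        super().__init__()
        self.lstm = nn.LSTM(row_len, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, images):            # images: (B, 28, 28)
        _, (h_n, _) = self.lstm(images)   # h_n: (1, B, hidden)
        return self.head(h_n[-1])         # logits: (B, classes)

model = PixelSequenceClassifier()
logits = model(torch.randn(4, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```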
arXiv Detail & Related papers (2022-09-23T09:42:44Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Improving Time Series Classification Algorithms Using Octave-Convolutional Layers [0.0]
We experimentally show that by substituting convolutions with octave convolutions (OctConv), we significantly improve accuracy on time series classification tasks (a generic 1D OctConv sketch follows below).
In addition, the updated ALSTM-OctFCN performs statistically on par with the top two time series classifiers.
arXiv Detail & Related papers (2021-09-28T13:12:09Z)
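
Octave convolution factorizes a convolution into high- and low-frequency channel groups processed at different resolutions, with cross-frequency exchange. A minimal 1D sketch of the original OctConv design (Chen et al., 2019), not the exact layer used in the paper above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv1d(nn.Module):
    """Minimal 1D octave convolution: channels split into a high-frequency
    branch at full resolution and a low-frequency branch at half
    resolution, with cross-branch information exchange."""
    def __init__(self, in_ch, out_ch, k=3, alpha=0.5):
        super().__init__()
        pad = k // 2
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.hh = nn.Conv1d(in_hi, out_hi, k, padding=pad)  # high -> high
        self.hl = nn.Conv1d(in_hi, out_lo, k, padding=pad)  # high -> low
        self.lh = nn.Conv1d(in_lo, out_hi, k, padding=pad)  # low  -> high
        self.ll = nn.Conv1d(in_lo, out_lo, k, padding=pad)  # low  -> low

    def forward(self, x_hi, x_lo):
        # x_hi: (B, in_hi, T), x_lo: (B, in_lo, T // 2)
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2)
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool1d(x_hi, 2))
        return y_hi, y_lo

layer = OctConv1d(in_ch=16, out_ch=32)
y_hi, y_lo = layer(torch.randn(4, 8, 64), torch.randn(4, 8, 32))
print(y_hi.shape, y_lo.shape)  # (4, 16, 64) (4, 16, 32)
```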
- Simulation of Open Quantum Dynamics with Bootstrap-Based Long Short-Term Memory Recurrent Neural Network [0.0]
The bootstrap method is applied in the LSTM-NN construction and prediction (a generic bootstrap-ensemble sketch follows below).
The bootstrap-based LSTM-NN approach is a practical and powerful tool to propagate the long-time quantum dynamics of open systems.
arXiv Detail & Related papers (2021-08-03T05:58:54Z)
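
Bootstrapping a learned propagator is a generic recipe: train several models on resampled-with-replacement copies of the training set and aggregate their predictions, using the spread as an uncertainty estimate. The sketch below uses a least-squares placeholder where the LSTM-NN would go; all names are illustrative.

```python
import numpy as np

def fit_predict(train_X, train_y, test_X, seed):
    # Placeholder for training one model on a bootstrap resample and
    # predicting the test set; replace with a real LSTM-NN.
    rng = np.random.default_rng(seed)
    w = np.linalg.lstsq(train_X, train_y, rcond=None)[0] + 0.01 * rng.normal()
    return test_X @ w

def bootstrap_ensemble(X, y, test_X, n_models=20):
    """Train n_models on resampled-with-replacement copies of (X, y);
    the ensemble mean is the prediction, the std its uncertainty band."""
    preds = []
    for seed in range(n_models):
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        preds.append(fit_predict(X[idx], y[idx], test_X, seed))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

X = np.random.randn(100, 5); y = X @ np.ones(5)
mean, std = bootstrap_ensemble(X, y, np.random.randn(10, 5))
print(mean.shape, std.shape)  # (10,) (10,)
```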
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-wide Traffic State with Missing Values [23.504633202965376]
We focus on RNN-based models and reformulate how RNNs and their variants are incorporated into traffic prediction models.
A stacked bidirectional and unidirectional LSTM network architecture (SBU-LSTM) is proposed to assist the design of neural network structures for traffic state forecasting.
We also propose a data imputation mechanism in the LSTM structure (LSTM-I) by designing an imputation unit to infer missing values and assist traffic prediction (a generic masked-imputation sketch follows below).
arXiv Detail & Related papers (2020-05-24T00:17:15Z)
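
In-structure imputation can be sketched generically: where an input is missing, substitute an estimate computed from the previous hidden state before the LSTM update. This is a hypothetical masked-imputation cell, not the paper's LSTM-I unit.

```python
import torch
import torch.nn as nn

class ImputingLSTM(nn.Module):
    """LSTM step that fills missing inputs (mask == 0) with a value
    predicted from the previous hidden state before the update.
    A generic sketch of in-structure imputation, not the LSTM-I cell."""
    def __init__(self, n_feat, hidden):
        super().__init__()
        self.cell = nn.LSTMCell(n_feat, hidden)
        self.impute = nn.Linear(hidden, n_feat)  # h_{t-1} -> estimate of x_t

    def forward(self, x_seq, mask_seq):
        # x_seq, mask_seq: (B, T, n_feat); mask is 1 where observed
        B, T, _ = x_seq.shape
        h = x_seq.new_zeros(B, self.cell.hidden_size)
        c = x_seq.new_zeros(B, self.cell.hidden_size)
        outs = []
        for t in range(T):
            x_hat = self.impute(h)  # model's guess for the missing entries
            x_t = mask_seq[:, t] * x_seq[:, t] + (1 - mask_seq[:, t]) * x_hat
            h, c = self.cell(x_t, (h, c))
            outs.append(h)
        return torch.stack(outs, dim=1)  # (B, T, hidden)

model = ImputingLSTM(n_feat=6, hidden=32)
x = torch.randn(2, 12, 6)
mask = (torch.rand(2, 12, 6) > 0.2).float()  # ~20% missing
print(model(x * mask, mask).shape)  # torch.Size([2, 12, 32])
```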
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)