Attention Sequence to Sequence Model for Machine Remaining Useful Life Prediction
- URL: http://arxiv.org/abs/2007.09868v1
- Date: Mon, 20 Jul 2020 03:40:51 GMT
- Authors: Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Ruqiang Yan,
and Xiaoli Li
- Abstract summary: We develop a novel attention-based sequence to sequence with auxiliary task (ATS2S) model.
We employ an attention mechanism to focus on all the important input information during the training process.
Our proposed method consistently achieves superior performance over 13 state-of-the-art methods.
- Score: 13.301585196004796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate estimation of remaining useful life (RUL) of industrial equipment
can enable advanced maintenance schedules, increase equipment availability and
reduce operational costs. However, existing deep learning methods for RUL
prediction fall short for two reasons. First, relying on a single objective
function to estimate the RUL limits the learned representations and thus the
prediction accuracy. Second, while longer sequences are more informative for
modelling the sensor dynamics of equipment, existing methods are less effective
at handling very long sequences, as they mainly focus on the latest
information. To address these two problems, we develop a novel attention-based
sequence to sequence with auxiliary task (ATS2S) model. In particular, our
model jointly optimizes both a reconstruction loss, which empowers the model
with predictive capability (by predicting the next input sequence given the
current input sequence), and an RUL prediction loss, which minimizes the
difference between the predicted RUL and the actual RUL.
Furthermore, to better handle longer sequences, we employ an attention
mechanism to focus on all the important input information during the training
process. Finally, we propose a new dual-latent feature representation that
integrates the encoder features and decoder hidden states to capture rich
semantic information in the data. We conduct extensive experiments on four real
datasets to evaluate the efficacy of the proposed method. Experimental results
show that our proposed method consistently achieves superior performance over
13 state-of-the-art methods.
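The joint objective and attention weighting described in the abstract can be sketched in plain Python. This is a minimal illustration, not the paper's exact formulation: the dot-product scoring, the mean-squared losses, and the weighting factor `lam` are all assumptions made for the sketch.

```python
import math

def softmax(scores):
    # Numerically stable softmax over raw attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_context(encoder_states, decoder_state):
    # Dot-product attention: score every encoder hidden state against the
    # current decoder state, so all time steps (not just the latest) can
    # contribute to the context vector.
    scores = [sum(h * d for h, d in zip(state, decoder_state))
              for state in encoder_states]
    weights = softmax(scores)
    dim = len(encoder_states[0])
    # Context vector = attention-weighted sum of encoder states.
    return [sum(w * state[i] for w, state in zip(weights, encoder_states))
            for i in range(dim)]

def joint_loss(recon_pred, recon_target, rul_pred, rul_true, lam=1.0):
    # Joint objective: reconstruction loss (MSE between the predicted and
    # actual next input sequence) plus a weighted RUL prediction loss
    # (squared error between predicted and actual RUL).
    recon = sum((p - t) ** 2 for p, t in zip(recon_pred, recon_target)) / len(recon_target)
    rul = (rul_pred - rul_true) ** 2
    return recon + lam * rul
```

In the full model these pieces would sit inside a trained encoder-decoder network; the sketch only shows how an attention-weighted context and a two-term objective combine.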
Related papers
- Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models.
A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes.
We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
arXiv Detail & Related papers (2024-10-03T15:45:15Z) - Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z) - Skeleton2vec: A Self-supervised Learning Framework with Contextualized
Target Representations for Skeleton Sequence [56.092059713922744]
We show that using high-level contextualized features as prediction targets can achieve superior performance.
Specifically, we propose Skeleton2vec, a simple and efficient self-supervised 3D action representation learning framework.
Our proposed Skeleton2vec outperforms previous methods and achieves state-of-the-art results.
arXiv Detail & Related papers (2024-01-01T12:08:35Z) - Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z) - Multi-Dimensional Self Attention based Approach for Remaining Useful
Life Estimation [0.17205106391379021]
Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM).
This paper carries out research into the remaining useful life prediction model for multi-sensor devices in the IIoT scenario.
A data-driven approach for RUL estimation is proposed in this paper.
arXiv Detail & Related papers (2022-12-12T08:50:27Z) - Checklist Models for Improved Output Fluency in Piano Fingering
Prediction [33.52847881359949]
We present a new approach for the task of predicting fingerings for piano music.
We put forward a checklist system, trained via reinforcement learning, that maintains a representation of recent predictions.
We demonstrate significant gains in performability directly attributable to improvements with respect to these metrics.
arXiv Detail & Related papers (2022-09-12T21:27:52Z) - Accurate Remaining Useful Life Prediction with Uncertainty
Quantification: a Deep Learning and Nonstationary Gaussian Process Approach [0.0]
Remaining useful life (RUL) refers to the expected remaining lifespan of a component or system.
We devise a highly accurate RUL prediction model with uncertainty quantification, which integrates and leverages the advantages of deep learning and nonstationary Gaussian process regression (DL-NSGPR).
Our computational experiments show that the DL-NSGPR predictions are highly accurate with root mean square error 1.7 to 6.2 times smaller than those of competing RUL models.
arXiv Detail & Related papers (2021-09-23T18:19:58Z) - Dual Aspect Self-Attention based on Transformer for Remaining Useful
Life Prediction [15.979729373555024]
We propose Dual Aspect Self-attention based on Transformer (DAST), a novel deep RUL prediction method.
DAST consists of two encoders, which work in parallel to simultaneously extract features of different sensors and time steps.
Experimental results on two real turbofan engine datasets show that our method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-06-30T06:54:59Z) - Learning representations with end-to-end models for improved remaining
useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z) - Representation Learning for Sequence Data with Deep Autoencoding
Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.