Dual Aspect Self-Attention based on Transformer for Remaining Useful
Life Prediction
- URL: http://arxiv.org/abs/2106.15842v1
- Date: Wed, 30 Jun 2021 06:54:59 GMT
- Title: Dual Aspect Self-Attention based on Transformer for Remaining Useful
Life Prediction
- Authors: Zhizheng Zhang, Wen Song, Qiqiang Li
- Abstract summary: We propose Dual Aspect Self-attention based on Transformer (DAST), a novel deep RUL prediction method.
DAST consists of two encoders, which work in parallel to simultaneously extract features of different sensors and time steps.
Experimental results on two real turbofan engine datasets show that our method significantly outperforms state-of-the-art methods.
- Score: 15.979729373555024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remaining useful life (RUL) prediction is one of the key technologies of
condition-based maintenance, and is important for maintaining the reliability and
safety of industrial equipment. While deep learning has achieved great success
in RUL prediction, existing methods have difficulty processing long
sequences and extracting information from both the sensor and time-step aspects. In
this paper, we propose Dual Aspect Self-attention based on Transformer (DAST),
a novel deep RUL prediction method. DAST consists of two encoders that work
in parallel to simultaneously extract features of different sensors and time
steps. Built solely on self-attention, the DAST encoders are more effective at
processing long data sequences, and are capable of adaptively learning to focus
on the more important parts of the input. Moreover, the parallel feature extraction
design avoids mutual interference between information from the two aspects. Experimental
results on two real turbofan engine datasets show that our method significantly
outperforms state-of-the-art methods.
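To make the dual-aspect design concrete, here is a minimal sketch in PyTorch of two parallel self-attention encoders, one over the sensor axis and one over the time-step axis, combined for RUL regression. All module names and dimensions are illustrative, and mean pooling plus concatenation is a simplifying assumption rather than the authors' exact fusion scheme.

```python
import torch
import torch.nn as nn

class DASTSketch(nn.Module):
    """Minimal sketch of the dual-aspect idea: two self-attention encoders
    run in parallel, one over sensors and one over time steps, and their
    features are fused for RUL regression (not the authors' implementation)."""

    def __init__(self, num_sensors: int, window: int, d_model: int = 64):
        super().__init__()
        self.sensor_proj = nn.Linear(window, d_model)      # each sensor's series -> token
        self.time_proj = nn.Linear(num_sensors, d_model)   # each time step -> token
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.sensor_encoder = enc()   # sensor-aspect encoder
        self.time_encoder = enc()     # time-step-aspect encoder
        self.head = nn.Linear(2 * d_model, 1)              # fused features -> RUL

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, num_sensors)
        sensor_tokens = self.sensor_proj(x.transpose(1, 2))  # (B, sensors, d)
        time_tokens = self.time_proj(x)                      # (B, window, d)
        s = self.sensor_encoder(sensor_tokens).mean(dim=1)   # sensor-aspect feature
        t = self.time_encoder(time_tokens).mean(dim=1)       # time-aspect feature
        return self.head(torch.cat([s, t], dim=-1)).squeeze(-1)

model = DASTSketch(num_sensors=14, window=30)
rul = model(torch.randn(8, 30, 14))   # (8,) predicted RUL values
```

Because the two encoders never exchange intermediate activations, the sensor-aspect and time-aspect features cannot distort each other before fusion, which is the stated motivation for the parallel design.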
Related papers
- SeaDATE: Remedy Dual-Attention Transformer with Semantic Alignment via Contrast Learning for Multimodal Object Detection [18.090706979440334]
Multimodal object detection leverages diverse modal information to enhance the accuracy and robustness of detectors.
Current methods merely stack Transformer-guided fusion techniques without exploring their capability to extract features at various depth layers of the network.
In this paper, we introduce an accurate and efficient object detection method named SeaDATE.
arXiv Detail & Related papers (2024-10-15T07:26:39Z)
- DAPE V2: Process Attention Score as Feature Map for Length Extrapolation [63.87956583202729]
We conceptualize attention as a feature map and apply the convolution operator to mimic the processing methods in computer vision.
The novel insight, which can be adapted to various attention-related models, reveals that the current Transformer architecture has the potential for further evolution.
arXiv Detail & Related papers (2024-10-07T07:21:49Z)
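As a toy illustration of the DAPE V2 idea of treating attention scores as a feature map, the sketch below refines the raw attention logits with a 2-D convolution (heads as channels) before the softmax. The residual form, kernel size, and omission of masking are assumptions for brevity, not the paper's exact operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_processed_attention(q, k, v, conv: nn.Conv2d):
    """Treat the raw attention logits as an image-like feature map
    (heads as channels) and refine them with a 2-D convolution before
    the softmax; residual form and masking omission are assumptions."""
    # q, k, v: (batch, heads, seq, head_dim)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5  # (B, H, T, T)
    scores = scores + conv(scores)     # convolve over the (T, T) score "image"
    return F.softmax(scores, dim=-1) @ v

heads = 4
conv = nn.Conv2d(heads, heads, kernel_size=3, padding=1)
q = k = v = torch.randn(2, heads, 16, 32)
out = conv_processed_attention(q, k, v, conv)  # (2, 4, 16, 32)
```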
- State Sequences Prediction via Fourier Transform for Representation Learning [111.82376793413746]
We propose State Sequences Prediction via Fourier Transform (SPF), a novel method for learning expressive representations efficiently.
We theoretically analyze the existence of structural information in state sequences, which is closely related to policy performance and signal regularity.
Experiments demonstrate that the proposed method outperforms several state-of-the-art algorithms in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2023-10-24T14:47:02Z)
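A hedged sketch of an SPF-style auxiliary objective: rather than predicting raw future states, regress Fourier coefficients of the state sequence, which compactly summarize its structural and periodic regularities. The encoder, predictor, and loss below are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative auxiliary objective in the spirit of SPF: regress the
# Fourier transform of a state sequence instead of the raw states.
encoder = nn.Linear(8, 32)            # stand-in for an RL state encoder
predictor = nn.Linear(32, 2 * 6 * 8)  # real + imag parts of 6 kept frequencies

states = torch.randn(16, 10, 8)       # (batch, horizon, state_dim)
freq = torch.fft.rfft(states, dim=1)  # (batch, 6, state_dim), since 10 // 2 + 1 = 6
target = torch.cat([freq.real, freq.imag], dim=-1).flatten(1)  # (batch, 96)

pred = predictor(encoder(states[:, 0]))      # predict spectrum from the first state
loss = nn.functional.mse_loss(pred, target)  # auxiliary representation loss
```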
- Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers [56.00476706550681]
Table Transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component to ensure data privacy.
In this paper, we explore the benefits of combining these two aspects in the scenario of transfer learning.
arXiv Detail & Related papers (2023-09-12T19:08:26Z)
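One plausible combination of the two ingredients, sketched generically below: a frozen pre-trained backbone with a small trainable head (parameter-efficient fine-tuning), updated with DP-SGD-style per-example gradient clipping and Gaussian noise. The hyperparameters and microbatch loop are illustrative assumptions, not the paper's recipe; production code would use a vetted DP library.

```python
import torch
import torch.nn as nn

# Generic DP-SGD-style update on a small head over a frozen backbone.
# All names and hyperparameters below are illustrative assumptions.
backbone = nn.Linear(16, 32).requires_grad_(False)  # stand-in for a pre-trained TabTransformer
head = nn.Linear(32, 1)                             # the only privately trained parameters
opt = torch.optim.SGD(head.parameters(), lr=0.1)
clip, sigma = 1.0, 0.5                              # clipping norm, noise multiplier

x, y = torch.randn(8, 16), torch.randn(8, 1)
accum = [torch.zeros_like(p) for p in head.parameters()]
for xi, yi in zip(x, y):                            # per-example gradients
    head.zero_grad()
    nn.functional.mse_loss(head(backbone(xi)), yi).backward()
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in head.parameters()))
    scale = torch.clamp(clip / (norm + 1e-6), max=1.0)
    for a, p in zip(accum, head.parameters()):
        a += p.grad * scale                         # clipped contribution
for a, p in zip(accum, head.parameters()):
    p.grad = (a + torch.randn_like(a) * sigma * clip) / len(x)  # add Gaussian noise
opt.step()                                          # one private update
```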
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
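For context, the sketch below shows the pad-and-random-crop shift that is a standard augmentation in sample-efficient visual RL; it is a common example of the DA attributes such studies analyze, not this paper's specific method.

```python
import torch
import torch.nn.functional as F

def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Pad-and-random-crop augmentation commonly used in visual RL:
    replicate-pad the frame, then crop back at a random offset."""
    b, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(obs)
    for i in range(b):  # independent shift per sample in the batch
        dx, dy = torch.randint(0, 2 * pad + 1, (2,)).tolist()
        out[i] = padded[i, :, dy:dy + h, dx:dx + w]
    return out

frames = torch.rand(32, 3, 84, 84)   # a batch of observations
augmented = random_shift(frames)     # same shape, randomly shifted
```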
- CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting [50.23240107430597]
We design a special Transformer, i.e., Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of channel-independent (CI) Transformers in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture temporal correlations among signals.
Second, in order to efficiently utilize the multi-scale knowledge, we design a token blend module to generate tokens with different resolutions.
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z)
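One plausible reading of a token blend module, sketched below: average runs of adjacent tokens to form coarser-resolution tokens and keep them alongside the originals, so downstream attention sees multi-scale views. The actual CARD module may differ.

```python
import torch

def token_blend(tokens: torch.Tensor, scales=(1, 2, 4)) -> torch.Tensor:
    """Sketch of a multi-resolution token blend: average runs of adjacent
    tokens to form coarser tokens at several scales (trailing tokens that
    do not fill a group are dropped)."""
    b, t, d = tokens.shape  # (batch, seq, d_model)
    blended = []
    for s in scales:
        merged = tokens[:, : t - t % s].reshape(b, t // s, s, d).mean(dim=2)
        blended.append(merged)          # (batch, t // s, d_model)
    return torch.cat(blended, dim=1)    # tokens of mixed resolutions

x = torch.randn(2, 16, 64)
multi = token_blend(x)                  # (2, 16 + 8 + 4, 64) = (2, 28, 64)
```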
- Multi-Dimensional Self Attention based Approach for Remaining Useful Life Estimation [0.17205106391379021]
Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM).
This paper studies RUL prediction for multi-sensor devices in the IIoT scenario.
A data-driven approach for RUL estimation is proposed.
arXiv Detail & Related papers (2022-12-12T08:50:27Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Transfer Learning for Autonomous Chatter Detection in Machining [0.9281671380673306]
Large-amplitude chatter vibrations are one of the most important phenomena in machining processes.
Three challenges can be identified in applying machine learning for chatter detection at large in industry.
These three challenges can be grouped under the umbrella of transfer learning.
arXiv Detail & Related papers (2022-04-11T20:46:06Z)
- On Transfer Learning of Traditional Frequency and Time Domain Features in Turning [1.0965065178451106]
We use traditional signal processing tools to identify chatter in accelerometer signals obtained from a turning experiment.
The tagged signals are then used to train a classifier.
Our results show that features extracted from the Fourier spectrum are the most informative when training a classifier and testing on data from the same cutting configuration.
arXiv Detail & Related papers (2020-08-28T14:47:57Z)
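A minimal sketch of the frequency-domain pipeline this entry describes: summarize each vibration window by the energy in coarse Fourier bins and train a standard classifier (scikit-learn assumed available). The synthetic signals and labels below are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectrum_features(signal: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Summarize a vibration signal by the energy in coarse Fourier bins,
    the kind of frequency-domain feature used for chatter classification."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    bins = np.array_split(power, n_bins)
    return np.array([b.sum() for b in bins])

rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 1024))   # placeholder accelerometer windows
labels = rng.integers(0, 2, size=100)    # placeholder chatter tags
X = np.stack([spectrum_features(s) for s in signals])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```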
- Attention Sequence to Sequence Model for Machine Remaining Useful Life Prediction [13.301585196004796]
We develop a novel attention-based sequence-to-sequence with auxiliary task (ATS2S) model.
We employ the attention mechanism to focus on all the important input information during the training process.
Our proposed method consistently achieves superior performance over 13 state-of-the-art methods.
arXiv Detail & Related papers (2020-07-20T03:40:51Z)
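A minimal sketch of the attention step in such sequence-to-sequence RUL models: weight every encoder time step by its relevance to the current decoder state and form a context vector. This is generic dot-product attention, assumed for illustration rather than taken from the ATS2S architecture.

```python
import torch
import torch.nn.functional as F

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: weight every encoder time step by its
    relevance to the current decoder state, as in attention-based
    sequence-to-sequence RUL models."""
    # decoder_state: (batch, d); encoder_states: (batch, T, d)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(-1)).squeeze(-1)
    weights = F.softmax(scores, dim=-1)  # (batch, T) attention over time steps
    return torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)

enc = torch.randn(4, 30, 64)        # encoder outputs over a 30-step window
dec = torch.randn(4, 64)            # current decoder hidden state
ctx = attention_context(dec, enc)   # (4, 64) context vector
```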
This list is automatically generated from the titles and abstracts of the papers on this site.