A novel Time-frequency Transformer and its Application in Fault
Diagnosis of Rolling Bearings
- URL: http://arxiv.org/abs/2104.09079v1
- Date: Mon, 19 Apr 2021 06:53:31 GMT
- Title: A novel Time-frequency Transformer and its Application in Fault
Diagnosis of Rolling Bearings
- Authors: Yifei Ding, Minping Jia, Qiuhua Miao, Yudong Cao
- Abstract summary: We propose a novel time-frequency Transformer (TFT) model inspired by the massive success of standard Transformer in sequence processing.
A new end-to-end fault diagnosis framework based on TFT is presented in this paper.
- Score: 0.24214594180459362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) has greatly expanded the scope of data-driven fault
diagnosis models. However, classical convolutional and recurrent structures have
drawbacks in computational efficiency and feature representation, while the latest
attention-based Transformer architecture has not yet been applied in this field. To
address these problems, we propose a novel time-frequency Transformer (TFT) model,
inspired by the great success of the standard Transformer in sequence processing.
Specifically, we design a new tokenizer and encoder module to extract effective
abstractions from the time-frequency representation (TFR) of vibration signals. On
this basis, a new end-to-end fault diagnosis framework based on the time-frequency
Transformer is presented in this paper. Through case studies on bearing experimental
datasets, we construct the optimal Transformer structure and verify the performance
of the diagnostic method. The superiority of the proposed method over a benchmark
model and other state-of-the-art methods is demonstrated.
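To make the pipeline described in the abstract concrete, the following is a minimal sketch of an end-to-end fault-diagnosis model in PyTorch: a raw vibration segment is converted to a time-frequency representation (TFR) via STFT, a simple tokenizer turns each TFR time frame into a token, and a standard Transformer encoder with a classification head predicts the fault class. All names, shapes, and hyperparameters (n_fft, d_model, the frame-wise tokenizer, the number of classes) are illustrative assumptions, not the authors' exact TFT architecture.

```python
import torch
import torch.nn as nn

class TimeFrequencyTransformer(nn.Module):
    """Illustrative TFT-style classifier: STFT -> tokenizer -> Transformer encoder -> head.
    Hyperparameters and module layout are assumptions for demonstration, not the paper's
    exact settings."""
    def __init__(self, n_fft=64, hop=32, d_model=128, n_heads=4, n_layers=4, n_classes=10):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        # Tokenizer: each time frame of the TFR (a column of frequency bins) becomes one token.
        self.tokenizer = nn.Linear(n_fft // 2 + 1, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos_embed = nn.Parameter(torch.zeros(1, 512, d_model))  # assumes <= 512 tokens
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                                   dim_feedforward=4 * d_model,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, signal_length) raw vibration segments
        spec = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop,
                          window=torch.hann_window(self.n_fft, device=x.device),
                          return_complex=True)
        tfr = spec.abs()                                  # (batch, freq_bins, time_frames)
        tokens = self.tokenizer(tfr.transpose(1, 2))      # (batch, time_frames, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)    # prepend a classification token
        tokens = torch.cat([cls, tokens], dim=1)
        tokens = tokens + self.pos_embed[:, : tokens.size(1)]
        feats = self.encoder(tokens)
        return self.head(feats[:, 0])                     # classify from the [CLS] token

# Usage on dummy data: 1-second segments at 2 kHz, 10 health states (illustrative numbers).
model = TimeFrequencyTransformer()
logits = model(torch.randn(8, 2048))
print(logits.shape)  # torch.Size([8, 10])
```

Tokenizing along the time axis (one token per STFT frame) is only one simple choice; the paper's tokenizer may split the TFR into patches or use a different embedding.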
Related papers
- Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient training and inference algorithms, such as low-rank computation, achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z) - Short-Time Fourier Transform for deblurring Variational Autoencoders [0.0]
Variational Autoencoders (VAEs) are powerful generative models.
Their generated samples are known to suffer from a characteristic blurriness compared to the outputs of alternative generative techniques.
arXiv Detail & Related papers (2024-01-06T08:57:11Z) - TSViT: A Time Series Vision Transformer for Fault Diagnosis [2.710064390178205]
This paper presents the Time Series Vision Transformer (TSViT) for effective fault diagnosis.
TSViT incorporates a convolutional layer to extract local features from vibration signals, alongside a transformer encoder to discern long-term temporal patterns.
Remarkably, TSViT achieves an unprecedented 100% average accuracy on two test sets and 99.99% on another.
arXiv Detail & Related papers (2023-11-12T18:16:48Z) - Diagnostic Spatio-temporal Transformer with Faithful Encoding [54.02712048973161]
This paper addresses the task of anomaly diagnosis when the underlying data generation process has a complex spatio-temporal (ST) dependency.
We formalize the problem as supervised dependency discovery, where the ST dependency is learned as a side product of time-series classification.
We show that the temporal positional encoding used in existing ST Transformer works has a serious limitation in capturing high frequencies (short time scales).
We also propose a new ST dependency discovery framework, which can provide readily consumable diagnostic information in both spatial and temporal directions.
arXiv Detail & Related papers (2023-05-26T05:31:23Z) - Full Stack Optimization of Transformer Inference: a Survey [58.55475772110702]
Transformer models achieve superior accuracy across a wide range of applications.
The amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate.
There has been an increased focus on making Transformer models more efficient.
arXiv Detail & Related papers (2023-02-27T18:18:13Z) - Towards Long-Term Time-Series Forecasting: Feature, Pattern, and
Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity, owing to the self-attention mechanism, despite its high computational cost.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - A microstructure estimation Transformer inspired by sparse
representation for diffusion MRI [11.761543033212797]
We present a learning-based framework based on Transformer for dMRI-based microstructure estimation with downsampled q-space data.
The proposed method achieves up to 11.25-fold acceleration in scan time and outperforms other state-of-the-art learning-based methods.
arXiv Detail & Related papers (2022-05-13T05:14:22Z) - ETSformer: Exponential Smoothing Transformers for Time-series
Forecasting [35.76867542099019]
We propose ETSformer, a novel time-series Transformer architecture that exploits the principle of exponential smoothing to improve Transformers for time-series forecasting.
In particular, inspired by classical exponential smoothing methods in time-series forecasting, we propose novel exponential smoothing attention (ESA) and frequency attention (FA) to replace the self-attention mechanism in vanilla Transformers, improving both accuracy and efficiency (a sketch of the exponential-smoothing-attention idea appears after this list).
arXiv Detail & Related papers (2022-02-03T02:50:44Z) - Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z)
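As referenced in the ETSformer entry above, the following is a minimal sketch of the exponential-smoothing-attention idea: each position aggregates past values with exponentially decaying weights controlled by a learnable smoothing factor, instead of learned query-key dot products. This is an illustrative reconstruction of the general principle under simplifying assumptions (causal decay applied directly to value projections); the class name, parameterization, and shapes are not ETSformer's actual implementation.

```python
import torch
import torch.nn as nn

class ExponentialSmoothingAttention(nn.Module):
    """Illustrative exponential-smoothing attention: position t aggregates values v_j with
    weights proportional to alpha * (1 - alpha)^(t - j) for j <= t. A simplified sketch of
    the idea behind ESA, not ETSformer's actual code."""
    def __init__(self, d_model):
        super().__init__()
        self._alpha_logit = nn.Parameter(torch.tensor(0.0))  # learnable smoothing factor in (0, 1)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x):                                     # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        alpha = torch.sigmoid(self._alpha_logit)
        idx = torch.arange(t, device=x.device)
        lag = idx.unsqueeze(1) - idx.unsqueeze(0)             # lag[t, j] = t - j
        weights = alpha * (1 - alpha) ** lag.clamp(min=0).float()  # exponential decay over the past
        weights = weights.masked_fill(lag < 0, 0.0)           # causal: ignore future positions
        weights = weights / weights.sum(dim=-1, keepdim=True) # normalize each row
        return weights @ self.value(x)                        # (batch, seq_len, d_model)

# Usage on a toy batch: 8 sequences of length 96 with 64 features (illustrative sizes).
attn = ExponentialSmoothingAttention(d_model=64)
out = attn(torch.randn(8, 96, 64))
print(out.shape)  # torch.Size([8, 96, 64])
```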
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.