TSViT: A Time Series Vision Transformer for Fault Diagnosis
- URL: http://arxiv.org/abs/2311.06916v2
- Date: Sun, 13 Oct 2024 17:13:28 GMT
- Title: TSViT: A Time Series Vision Transformer for Fault Diagnosis
- Authors: Shouhua Zhang, Jiehan Zhou, Xue Ma, Susanna Pirttikangas, Chunsheng Yang
- Abstract summary: This paper presents the Time Series Vision Transformer (TSViT) for effective fault diagnosis.
TSViT incorporates a convolutional layer to extract local features from vibration signals, alongside a transformer encoder to discern long-term temporal patterns.
Remarkably, TSViT achieves an unprecedented 100% average accuracy on two test sets and 99.99% on another.
- Score: 2.710064390178205
- Abstract: Traditional fault diagnosis methods using Convolutional Neural Networks (CNNs) often struggle with capturing the temporal dynamics of vibration signals. To overcome this, the application of Transformer-based methods, particularly the Vision Transformer (ViT), to fault diagnosis is gaining traction. Nonetheless, these methods typically require extensive preprocessing, which increases computational complexity, potentially reducing the efficiency of the diagnosis process. Addressing this gap, this paper presents the Time Series Vision Transformer (TSViT), tailored for effective fault diagnosis. TSViT incorporates a convolutional layer to extract local features from vibration signals, alongside a transformer encoder to discern long-term temporal patterns. A thorough experimental comparison on three diverse datasets demonstrates TSViT's effectiveness and adaptability. Moreover, the paper delves into the influence of hyperparameter tuning on the model's performance, computational demand, and parameter count. Remarkably, TSViT achieves an unprecedented 100% average accuracy on two test sets and 99.99% on another, showcasing its exceptional diagnostic capabilities.
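The abstract describes the architecture only at a high level; the following is a minimal PyTorch sketch of a Conv1d front-end feeding a Transformer encoder, with assumed layer sizes, stride, and a mean-pooled classification head (the paper's exact configuration may differ, and positional encoding is omitted for brevity).

```python
import torch
import torch.nn as nn

class TSViTSketch(nn.Module):
    """Minimal sketch: Conv1d local-feature extractor + Transformer encoder.
    Layer sizes, stride, and the classification head are assumptions, not the
    configuration reported in the paper; positional encoding is omitted."""
    def __init__(self, n_classes=10, d_model=64, n_heads=4, n_layers=4,
                 kernel_size=16, stride=8):
        super().__init__()
        # The convolution turns the raw 1-D vibration signal into a sequence
        # of local-feature tokens (one token per receptive-field window).
        self.tokenizer = nn.Conv1d(1, d_model, kernel_size, stride=stride)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                            # x: (batch, signal_length)
        tokens = self.tokenizer(x.unsqueeze(1))      # (batch, d_model, n_tokens)
        tokens = tokens.transpose(1, 2)              # (batch, n_tokens, d_model)
        encoded = self.encoder(tokens)               # long-range temporal mixing
        return self.head(encoded.mean(dim=1))        # mean-pool, then classify

# Example: a batch of 8 vibration segments, 2048 samples each.
logits = TSViTSketch()(torch.randn(8, 2048))
print(logits.shape)   # torch.Size([8, 10])
```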
Related papers
- Visual Fourier Prompt Tuning [63.66866445034855]
We propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models.
Our approach incorporates the Fast Fourier Transform into prompt embeddings and harmoniously considers both spatial and frequency domain information.
Our results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks.
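As a rough illustration of injecting frequency-domain information into learnable prompts, here is a hedged sketch (not the paper's actual VFPT formulation): an FFT is taken over the prompt embeddings and its real part is blended back in before the prompts are prepended to the patch tokens; the blend, prompt count, and embedding size are all assumptions.

```python
import torch
import torch.nn as nn

class FourierPromptSketch(nn.Module):
    """Illustrative only: learnable prompt tokens whose FFT (along the
    embedding dimension) is mixed back in, loosely mirroring the idea of
    combining spatial- and frequency-domain prompt information."""
    def __init__(self, n_prompts=8, d_model=768):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, patch_tokens):   # patch_tokens: (batch, n_patches, d_model)
        freq = torch.fft.fft(self.prompts, dim=-1).real   # frequency-domain view
        mixed = self.prompts + freq                        # naive blend (assumption)
        mixed = mixed.unsqueeze(0).expand(patch_tokens.size(0), -1, -1)
        return torch.cat([mixed, patch_tokens], dim=1)     # prepend the prompts

tokens = torch.randn(2, 196, 768)           # e.g. ViT-B/16 patch tokens
print(FourierPromptSketch()(tokens).shape)  # torch.Size([2, 204, 768])
```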
arXiv Detail & Related papers (2024-11-02T18:18:35Z) - Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals, such as in computer vision or biomedical data, often face challenges due to the variability across hardware devices or recording sessions.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
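For intuition only, the snippet below sketches a plain Gaussian Monge (optimal-transport) map that pushes source features toward target statistics; STMA itself is a spatio-temporal extension, so treat this as a simplified stand-in with assumed feature shapes.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_monge_map(Xs, Xt):
    """Closed-form Monge map between Gaussian approximations N(ms, Cs) and
    N(mt, Ct): T(x) = mt + A (x - ms). A simplified illustration, not the full
    spatio-temporal alignment proposed in the STMA paper."""
    ms, mt = Xs.mean(axis=0), Xt.mean(axis=0)
    Cs = np.cov(Xs, rowvar=False) + 1e-6 * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + 1e-6 * np.eye(Xt.shape[1])
    Cs_half = np.real(sqrtm(Cs))
    Cs_half_inv = np.linalg.inv(Cs_half)
    A = Cs_half_inv @ np.real(sqrtm(Cs_half @ Ct @ Cs_half)) @ Cs_half_inv
    return (Xs - ms) @ A.T + mt     # source samples mapped to target statistics

# Example: align 500 source feature vectors (8 channels) to a target session.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 8)) * 2.0 + 1.0
Xt = rng.normal(size=(500, 8)) * 0.5 - 1.0
Xs_aligned = gaussian_monge_map(Xs, Xt)
```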
arXiv Detail & Related papers (2024-07-19T13:33:38Z) - TDANet: A Novel Temporal Denoise Convolutional Neural Network With Attention for Fault Diagnosis [0.5277756703318045]
This paper proposes the Temporal Denoise Convolutional Neural Network With Attention (TDANet) to improve fault diagnosis performance in noise environments.
The TDANet model transforms one-dimensional signals into two-dimensional tensors based on their periodic properties, employing multi-scale 2D convolution kernels to extract signal information both within and across periods.
Evaluation on two datasets, CWRU (single sensor) and Real aircraft sensor fault (multiple sensors), demonstrates that the TDANet model significantly outperforms existing deep learning approaches in terms of diagnostic accuracy under noisy environments.
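The 1D-to-2D step can be pictured as folding the signal by its dominant period (estimated via FFT) and then applying 2-D convolutions; the sketch below is a guess at that mechanism, not TDANet's exact transform, and it uses a single kernel size where the paper uses multi-scale kernels.

```python
import torch
import torch.nn as nn

def fold_by_period(signal):
    """Estimate the dominant period with an FFT and fold the 1-D signal into a
    (cycles x period) image. An illustrative guess at a periodic 1D->2D reshape."""
    spectrum = torch.fft.rfft(signal).abs()
    spectrum[0] = 0.0                               # ignore the DC component
    k = int(torch.argmax(spectrum))                 # dominant frequency bin
    period = max(2, signal.numel() // max(k, 1))    # samples per cycle
    n_cycles = signal.numel() // period
    return signal[: n_cycles * period].reshape(n_cycles, period)

signal = torch.sin(torch.linspace(0, 40 * torch.pi, 4000))    # ~20 cycles
image = fold_by_period(signal)                                # roughly (20, 200)
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)              # single-scale here
feat = conv(image.unsqueeze(0).unsqueeze(0))                  # within/across-period mixing
print(image.shape, feat.shape)
```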
arXiv Detail & Related papers (2024-03-29T02:54:41Z) - Diagnostic Spatio-temporal Transformer with Faithful Encoding [54.02712048973161]
This paper addresses the task of anomaly diagnosis when the underlying data generation process has a complex spatio-temporal (ST) dependency.
We formalize the problem as supervised dependency discovery, where the ST dependency is learned as a side product of time-series classification.
We show that the temporal positional encoding used in existing ST transformer works has a serious limitation in capturing higher frequencies (short time scales).
We also propose a new ST dependency discovery framework, which can provide readily consumable diagnostic information in both spatial and temporal directions.
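To make the point about temporal positional encoding concrete, here is the standard sinusoidal encoding; its highest angular frequency is 1 rad per step (a wavelength of roughly 6.3 positions), which is one way to see why very short time scales are represented coarsely. This is generic background, not the paper's proposed faithful encoding.

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    """Standard Transformer positional encoding: wavelengths form a geometric
    series from 2*pi up to 10000*2*pi, so the fastest component completes one
    cycle only every ~6.3 time steps -- coarse for very short time scales."""
    positions = np.arange(n_positions)[:, None]                    # (T, 1)
    freqs = 1.0 / (10000 ** (np.arange(0, d_model, 2) / d_model))  # (d_model/2,)
    angles = positions * freqs[None, :]
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(n_positions=128, d_model=64)
print(pe.shape)   # (128, 64)
```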
arXiv Detail & Related papers (2023-05-26T05:31:23Z) - Power Quality Event Recognition and Classification Using an Online Sequential Extreme Learning Machine Network based on Wavelets [0.0]
This study implements and tests a prototype of an Online Sequential Extreme Learning Machine (OS-ELM) classifier based on wavelets for detecting power quality problems under transient conditions.
Several types of transient events were used to demonstrate the classifier's ability to detect and categorize various types of power disturbances.
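For readers unfamiliar with OS-ELM, the sequential update is a recursive least-squares step over a random-feature hidden layer; the sketch below shows that generic update (the wavelet feature extraction and the paper's exact settings are assumed to happen upstream and are not shown).

```python
import numpy as np

class OSELMSketch:
    """Generic OS-ELM: random hidden layer + recursive least-squares output
    weights. Wavelet-based input features are assumed precomputed."""
    def __init__(self, n_features, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_features, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.P = None
        self.beta = None
        self.n_classes = n_classes

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X0, y0):                   # initial training batch
        H = self._hidden(X0)
        T = np.eye(self.n_classes)[y0]               # one-hot targets
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, y):                     # online chunk update
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```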
arXiv Detail & Related papers (2022-12-27T06:33:46Z) - Fault diagnosis for three-phase PWM rectifier based on deep feedforward network with transient synthetic features [0.0]
A fault diagnosis method based on deep feedforward network with transient synthetic features is proposed.
The average fault diagnosis accuracy can reach 97.85% for transient synthetic fault data.
Online fault diagnosis experiments show that the method can accurately locate the fault IGBTs.
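Assuming the transient synthetic features arrive as fixed-length vectors, a deep feedforward classifier of the kind described can be sketched as a small PyTorch MLP; the sizes and class count below are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Placeholder sizes: the paper's feature dimensionality, network depth, and
# fault-class count (e.g. which IGBT is open-circuited) are not reproduced here.
mlp = nn.Sequential(
    nn.Linear(24, 64), nn.ReLU(),     # 24 transient synthetic features (assumed)
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 12),                # 12 hypothetical fault classes
)
logits = mlp(torch.randn(16, 24))     # a batch of 16 feature vectors
print(logits.shape)                   # torch.Size([16, 12])
```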
arXiv Detail & Related papers (2022-11-01T02:32:20Z) - Q-ViT: Accurate and Fully Quantized Low-bit Vision Transformer [56.87383229709899]
We develop an information rectification module (IRM) and a distribution-guided distillation scheme for fully quantized vision transformers (Q-ViT).
Our method achieves a much better performance than the prior arts.
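Q-ViT's specific IRM and distillation scheme are not reproduced here; as generic background, the snippet below shows a plain low-bit fake quantizer with a straight-through estimator, the kind of operation a fully quantized ViT applies to its weights and activations (the scale choice is an assumption).

```python
import torch

def fake_quantize(x, bits=2):
    """Generic symmetric low-bit fake quantization with a straight-through
    estimator. Not Q-ViT's information rectification module or distillation."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 1 for 2-bit, 7 for 4-bit
    scale = x.detach().abs().mean() / qmax + 1e-8   # crude scale choice (assumption)
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()                     # STE: forward q, backward identity

w = torch.randn(64, 64, requires_grad=True)
wq = fake_quantize(w, bits=2)
wq.sum().backward()                                 # gradients flow through the STE
print(torch.unique(wq.detach()).numel())            # only a handful of levels
```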
arXiv Detail & Related papers (2022-10-13T04:00:29Z) - DisCoVQA: Temporal Distortion-Content Transformers for Video Quality Assessment [56.42140467085586]
Some temporal variations cause temporal distortions and lead to extra quality degradation.
The human visual system often pays different attention to frames with different content.
We propose a novel and effective transformer-based VQA method to tackle these two issues.
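As a loose illustration of transformer-based temporal modeling for VQA (not DisCoVQA's actual distortion/content design), per-frame quality features can be passed through a temporal Transformer encoder and pooled into a single score; the feature size and depth below are assumptions.

```python
import torch
import torch.nn as nn

class TemporalVQASketch(nn.Module):
    """Illustrative temporal transformer over per-frame features; the paper's
    distortion/content split and visual backbone are not modeled here."""
    def __init__(self, d_feat=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_feat, nhead=n_heads,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_feat, 1)

    def forward(self, frame_feats):                 # (batch, n_frames, d_feat)
        mixed = self.temporal(frame_feats)          # attend across frames
        return self.head(mixed).mean(dim=1)         # average per-frame scores

scores = TemporalVQASketch()(torch.randn(4, 32, 256))   # 4 clips, 32 frames each
print(scores.shape)   # torch.Size([4, 1])
```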
arXiv Detail & Related papers (2022-06-20T15:31:27Z) - TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data [13.864161788250856]
TranAD is a deep transformer-network-based anomaly detection and diagnosis model.
It uses attention-based sequence encoders to swiftly perform inference with the knowledge of the broader temporal trends in the data.
TranAD can outperform state-of-the-art baseline methods in detection and diagnosis performance with data and time-efficient training.
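The general recipe behind transformer-based anomaly detection on multivariate series can be sketched as reconstruction with an attention encoder and scoring by reconstruction error; TranAD's adversarial two-decoder training is more involved, so treat this as a simplified stand-in with assumed sizes.

```python
import torch
import torch.nn as nn

class ReconstructionADSketch(nn.Module):
    """Simplified stand-in: encode a multivariate window with self-attention,
    reconstruct it, and use per-timestep error as the anomaly score. TranAD's
    adversarial, two-decoder training procedure is not reproduced."""
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.inp = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, n_features)

    def forward(self, x):                        # x: (batch, time, n_features)
        recon = self.out(self.encoder(self.inp(x)))
        score = (recon - x).pow(2).mean(dim=-1)  # anomaly score per timestep
        return recon, score

model = ReconstructionADSketch(n_features=8)
recon, score = model(torch.randn(2, 100, 8))
print(score.shape)   # torch.Size([2, 100])
```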
arXiv Detail & Related papers (2022-01-18T19:41:29Z) - Vision Transformers are Robust Learners [65.91359312429147]
We study the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples.
We present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners.
arXiv Detail & Related papers (2021-05-17T02:39:22Z) - A novel Time-frequency Transformer and its Application in Fault Diagnosis of Rolling Bearings [0.24214594180459362]
We propose a novel time-frequency Transformer (TFT) model, inspired by the massive success of the standard Transformer in sequence processing.
A new end-to-end fault diagnosis framework based on TFT is presented in this paper.
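One plausible reading of a time-frequency Transformer front-end is an STFT whose spectral frames become tokens for a Transformer encoder; the sketch below follows that reading with assumed window sizes and class count, not the TFT architecture reported in the paper.

```python
import torch
import torch.nn as nn

class TimeFrequencySketch(nn.Module):
    """Assumed pipeline: STFT magnitude frames -> linear projection ->
    Transformer encoder -> fault classifier. Window sizes and the token design
    are guesses, not the paper's TFT."""
    def __init__(self, n_fft=256, d_model=64, n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        self.n_fft = n_fft
        self.proj = nn.Linear(n_fft // 2 + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                          # x: (batch, signal_length)
        spec = torch.stft(x, n_fft=self.n_fft, hop_length=self.n_fft // 2,
                          window=torch.hann_window(self.n_fft),
                          return_complex=True).abs()      # (batch, freq, frames)
        tokens = self.proj(spec.transpose(1, 2))          # (batch, frames, d_model)
        return self.head(self.encoder(tokens).mean(dim=1))

print(TimeFrequencySketch()(torch.randn(8, 4096)).shape)  # torch.Size([8, 4])
```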
arXiv Detail & Related papers (2021-04-19T06:53:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.