Deep Learning-based Machine Condition Diagnosis using Short-time Fourier Transformation Variants
- URL: http://arxiv.org/abs/2408.09649v2
- Date: Mon, 14 Oct 2024 10:05:49 GMT
- Title: Deep Learning-based Machine Condition Diagnosis using Short-time Fourier Transformation Variants
- Authors: Eduardo Jr Piedad, Zherish Galvin Mayordo, Eduardo Prieto-Araujo, Oriol Gomis-Bellmunt
- Abstract summary: This study converts time-series motor current signals to time-frequency 2D plots using Short-time Fourier Transform (STFT) methods.
Deep learning (DL) models based on the previous Convolutional Neural Network (CNN) architecture are trained and validated.
Four of the five methods outperformed the previous best ML method (93.20% accuracy), and all five outperformed the previous 2D-plot-based methods, which achieved 80.25, 74.80, and 82.80% accuracy, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In motor condition diagnosis, electrical current signature serves as an alternative feature to vibration-based sensor data, which is a more expensive and invasive method. Machine learning (ML) techniques have been emerging in diagnosing motor conditions using only motor phase current signals. This study converts time-series motor current signals to time-frequency 2D plots using Short-time Fourier Transform (STFT) methods. The motor current signal dataset consists of 3,750 sample points with five classes - one healthy and four synthetically-applied motor fault conditions, and with five loading conditions: 0, 25, 50, 75, and 100%. Five transformation methods are used on the dataset: non-overlap and overlap STFTs, non-overlap and overlap realigned STFTs, and synchrosqueezed STFT. Then, deep learning (DL) models based on the previous Convolutional Neural Network (CNN) architecture are trained and validated from generated plots of each method. The DL models of overlap-STFT, overlap R-STFT, non-overlap STFT, non-overlap R-STFT, and synchrosqueezed-STFT performed exceptionally with an average accuracy of 97.65, 96.03, 96.08, 96.32, and 88.27%, respectively. Four methods outperformed the previous best ML method with 93.20% accuracy, while all five outperformed previous 2D-plot-based methods with accuracy of 80.25, 74.80, and 82.80%, respectively, using the same dataset, same DL architecture, and validation steps.
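The core transformation step described above can be sketched with SciPy's STFT; the synthetic signal, sampling rate, and window length below are illustrative stand-ins, not the paper's actual dataset or settings.

```python
import numpy as np
from scipy.signal import stft

# Synthetic stand-in for a motor phase current: a 60 Hz fundamental
# plus a small fault-like sideband (illustrative only).
fs = 3750                      # sampling rate chosen arbitrarily for this sketch
t = np.arange(fs) / fs
current = np.sin(2 * np.pi * 60 * t) + 0.1 * np.sin(2 * np.pi * 90 * t)

# Non-overlap STFT: the hop equals the window length (noverlap=0).
f, tt_no, Z_no = stft(current, fs=fs, nperseg=128, noverlap=0)

# Overlap STFT: 50% overlap between consecutive windows.
f, tt_ov, Z_ov = stft(current, fs=fs, nperseg=128, noverlap=64)

# |Z| is the 2D time-frequency magnitude that would be rendered as a
# plot and fed to the CNN; overlapping yields finer time resolution.
print(Z_no.shape, Z_ov.shape)
```

The overlap variant produces roughly twice as many time frames from the same signal, which is the trade-off the five transformation methods explore.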
Related papers
- Wavelet Logic Machines: Learning and Reasoning in the Spectral Domain Without Neural Networks [0.0]
We introduce a fully spectral learning framework that eliminates traditional neural layers by operating entirely in the wavelet domain. The model applies learnable nonlinear transformations, including soft-thresholding and gain-phase modulation, directly to wavelet coefficients. It also includes a differentiable wavelet basis selection mechanism, enabling adaptive processing using families such as Haar, Daubechies, and Biorthogonal wavelets.
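As a minimal illustration of one ingredient, soft-thresholding of wavelet coefficients can be sketched with a hand-rolled single-level Haar transform; the paper's learnable gains, phase modulation, and basis selection are not reproduced here.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def soft_threshold(c, t):
    """Shrink coefficients toward zero; those below t are zeroed out."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

x = np.array([4.0, 4.0, 1.0, 1.0, 0.1, -0.1, 0.0, 0.0])
a, d = haar_dwt(x)
d_hat = soft_threshold(d, 0.2)   # small detail coefficients vanish
```

In the paper the threshold would be a learned parameter rather than the fixed 0.2 used here.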
arXiv Detail & Related papers (2025-07-18T01:28:17Z) - Benchmarking Traditional Machine Learning and Deep Learning Models for Fault Detection in Power Transformers [0.0]
This study presents a comparative analysis of conventional machine learning (ML) algorithms and deep learning (DL) algorithms for fault classification of power transformers. Using a condition-monitored dataset spanning 10 months, various gas concentration features were normalized and used to train five ML classifiers. The RF model achieved the highest ML accuracy at 86.82%, while the 1D-CNN model attained a close 86.30%.
arXiv Detail & Related papers (2025-05-07T15:19:53Z) - DispFormer: Pretrained Transformer for Flexible Dispersion Curve Inversion from Global Synthesis to Regional Applications [59.488352977043974]
This study proposes DispFormer, a transformer-based neural network for inverting the $v_s$ profile from Rayleigh-wave phase and group dispersion curves.
Results indicate that zero-shot DispFormer, even without any labeled data, produces inversion profiles that match well with the ground truth.
arXiv Detail & Related papers (2025-01-08T09:08:24Z) - FlowTS: Time Series Generation via Rectified Flow [67.41208519939626]
FlowTS is an ODE-based model that leverages rectified flow with straight-line transport in probability space.
For unconditional setting, FlowTS achieves state-of-the-art performance, with context FID scores of 0.019 and 0.011 on Stock and ETTh datasets.
For conditional setting, we have achieved superior performance in solar forecasting.
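The straight-line transport idea behind rectified flow can be sketched in a few lines; this is a toy illustration with fixed vectors, not the FlowTS model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rectified flow pairs a noise sample x0 with a data sample x1 and
# supervises a velocity field along the straight line between them.
x0 = rng.standard_normal(4)           # noise sample
x1 = np.array([1.0, 2.0, 3.0, 4.0])   # stand-in "data" sample

t = 0.25
x_t = (1.0 - t) * x0 + t * x1         # point on the straight-line path
v_target = x1 - x0                    # constant velocity target on that path

# Integrating the exact velocity from x_t recovers x1 at t = 1:
x_end = x_t + (1.0 - t) * v_target
```

Because the target trajectories are straight, a well-trained velocity network can be integrated with very few ODE steps, which is the source of the method's sampling efficiency.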
arXiv Detail & Related papers (2024-11-12T03:03:23Z) - One-Step Diffusion Distillation through Score Implicit Matching [74.91234358410281]
We present Score Implicit Matching (SIM) a new approach to distilling pre-trained diffusion models into single-step generator models.
SIM shows strong empirical performances for one-step generators.
By applying SIM to a leading transformer-based diffusion model, we distill a single-step generator for text-to-image generation.
arXiv Detail & Related papers (2024-10-22T08:17:20Z) - Exploring Wavelet Transformations for Deep Learning-based Machine Condition Diagnosis [0.0]
This research transforms time-series current signals into time-frequency 2D representations via Wavelet Transform.
The study employs five WT-based techniques: WT-Amor, WT-Bump, WT-Morse, WSST-Amor, and WSST-Bump.
The DL models for WT-Amor, WT-Bump, and WT-Morse showed remarkable effectiveness with peak model accuracy of 90.93, 89.20, and 93.73%, respectively.
arXiv Detail & Related papers (2024-08-19T02:06:33Z) - DKDL-Net: A Lightweight Bearing Fault Detection Model via Decoupled Knowledge Distillation and Low-Rank Adaptation Fine-tuning [0.0]
This paper proposes a lightweight bearing fault diagnosis model DKDL-Net to solve these challenges.
The model is trained on the CWRU data set by decoupling knowledge distillation and low rank adaptive fine tuning.
Experiments show that DKDL-Net achieves 99.48% accuracy on the test set at low computational complexity while maintaining model performance.
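Low-rank adaptation, one of the two techniques named above, adds a trainable low-rank update to a frozen weight matrix; a minimal sketch follows, with illustrative dimensions and the conventional zero initialization of one factor (not DKDL-Net's exact configuration).

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                              # layer width and low rank (illustrative)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # zero-initialized up-projection

x = rng.standard_normal(d)
y = W @ x + B @ (A @ x)                  # adapted forward pass: (W + BA) x
```

With B initialized to zero, the adapted layer starts out identical to the frozen one; only the 2*d*r low-rank parameters are updated during fine-tuning.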
arXiv Detail & Related papers (2024-06-10T09:09:08Z) - Few-shot Learning using Data Augmentation and Time-Frequency Transformation for Time Series Classification [6.830148185797109]
We propose a novel few-shot learning framework through data augmentation.
We also develop a sequence-spectrogram neural network (SSNN).
Our methodology demonstrates its applicability to few-shot problems in time series classification.
arXiv Detail & Related papers (2023-11-06T15:32:50Z) - Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z) - Transformer-based approaches to Sentiment Detection [55.41644538483948]
We examined the performance of four different types of state-of-the-art transformer models for text classification.
The RoBERTa transformer model performs best on the test dataset with a score of 82.6% and is highly recommended for quality predictions.
arXiv Detail & Related papers (2023-03-13T17:12:03Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
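The single-transform idea can be sketched as one forward FFT, a pointwise spectral filter (random here, learned in practice), and one inverse transform; this is a generic frequency-domain-operator sketch, not the T1 architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
x = rng.standard_normal(n)

# Complex pointwise filter over the one-sided spectrum (n//2 + 1 bins);
# in a learned model these would be trainable parameters.
weights = rng.standard_normal(n // 2 + 1) + 1j * rng.standard_normal(n // 2 + 1)

X = np.fft.rfft(x)                    # single forward transform
y = np.fft.irfft(weights * X, n=n)    # filter in frequency, transform back
```

Keeping the data in the frequency domain between layers, rather than transforming back and forth, is what the "transform once" blueprint optimizes for.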
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics in Industrial Metaverse [49.1501082763252]
This paper presents HFEDMS for incorporating practical FL into the emerging Industrial Metaverse.
It reduces data heterogeneity through dynamic grouping and training mode conversion.
Then, it compensates for the forgotten knowledge by fusing compressed historical data semantics.
Experiments have been conducted on the streamed non-i.i.d. FEMNIST dataset using 368 simulated devices.
arXiv Detail & Related papers (2022-11-07T04:33:24Z) - Low Latency Real-Time Seizure Detection Using Transfer Deep Learning [0.0]
Scalp electroencephalogram (EEG) signals inherently have a low signal-to-noise ratio.
Most popular approaches to seizure detection using deep learning do not jointly model this information or require multiple passes over the signal.
In this paper, we exploit both simultaneously by converting the multichannel signal to a grayscale image and using transfer learning to achieve high performance.
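The signal-to-image conversion step can be sketched with a min-max normalization to 8-bit grayscale; the channel count and window length below are illustrative, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in multichannel EEG window: 20 channels x 250 samples.
window = rng.standard_normal((20, 250))

# Min-max normalize to [0, 255] and cast to uint8: each channel becomes
# one row of a grayscale image that a pretrained CNN can ingest.
lo, hi = window.min(), window.max()
image = ((window - lo) / (hi - lo) * 255).astype(np.uint8)
```

Rendering all channels into one image lets a single forward pass of a transfer-learned CNN see the cross-channel context jointly, rather than requiring multiple passes over the signal.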
arXiv Detail & Related papers (2022-02-16T00:03:00Z) - TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking [74.82415271960315]
We propose a solution named TransMOT to efficiently model the spatial and temporal interactions among objects in a video.
TransMOT is not only more computationally efficient than the traditional Transformer, but it also achieves better tracking accuracy.
The proposed method is evaluated on multiple benchmark datasets including MOT15, MOT16, MOT17, and MOT20.
arXiv Detail & Related papers (2021-04-01T01:49:05Z) - Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition [42.400429835080416]
Conventional 3D convolutional neural networks (CNNs) are computationally expensive, memory intensive, prone to overfitting and most importantly, there is a need to improve their feature learning capabilities.
We propose new class of convolutional blocks that can serve as an alternative to 3D convolutional layer and its variants in 3D CNNs.
Our evaluation on seven action recognition datasets, including Something-Something v1 and v2, Jester, Diving-48, Kinetics-400, UCF 101, and HMDB 51, demonstrates that STFT-block-based 3D CNNs achieve on par or even better performance compared to the state-of-the-art.
arXiv Detail & Related papers (2020-07-22T12:26:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.