FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification
- URL: http://arxiv.org/abs/2312.02380v3
- Date: Wed, 29 May 2024 15:13:29 GMT
- Title: FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification
- Authors: Anthony Zhou, Amir Barati Farimani
- Abstract summary: We present a novel self-supervised pretraining and fine-tuning framework based on transformer models.
In particular, we investigate different tokenization and data augmentation strategies to reach state-of-the-art accuracies.
This introduces a new paradigm where models can be pretrained on unlabeled data from different bearings, faults, and machinery and quickly deployed to new, data-scarce applications.
- Score: 7.136205674624813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growth of global consumption has motivated important applications of deep learning to smart manufacturing and machine health monitoring. In particular, analyzing vibration data offers great potential to extract meaningful insights into predictive maintenance by the detection of bearing faults. Deep learning can be a powerful method to predict these mechanical failures; however, such models lack generalizability to new tasks or datasets and require expensive, labeled mechanical data. We address this by presenting a novel self-supervised pretraining and fine-tuning framework based on transformer models. In particular, we investigate different tokenization and data augmentation strategies to reach state-of-the-art accuracies using transformer models. Furthermore, we demonstrate self-supervised masked pretraining for vibration signals and its application to low-data regimes, task adaptation, and dataset adaptation. Pretraining is able to improve performance on scarce, unseen training samples, as well as when fine-tuning on fault classes outside of the pretraining distribution. Furthermore, pretrained transformers are shown to be able to generalize to a different dataset in a few-shot manner. This introduces a new paradigm where models can be pretrained on unlabeled data from different bearings, faults, and machinery and quickly deployed to new, data-scarce applications to suit specific manufacturing needs.
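As a rough illustration of the framework described above, the sketch below pretrains a small transformer encoder by masking and reconstructing patches of a vibration signal, then fine-tunes a classification head on a handful of labeled samples. It assumes patch-based tokenization and PyTorch; the module names, patch length, and masking ratio are illustrative choices, not the paper's released code.

```python
# Minimal sketch of masked pretraining on vibration signals, then fine-tuning
# for fault classification. Hyperparameters and module names are illustrative.
import torch
import torch.nn as nn

class VibrationTransformer(nn.Module):
    def __init__(self, patch_len=64, d_model=128, n_heads=4, n_layers=4, n_classes=10):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)              # tokenize: one patch -> one token
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.recon_head = nn.Linear(d_model, patch_len)         # used during pretraining
        self.cls_head = nn.Linear(d_model, n_classes)           # used during fine-tuning

    def tokenize(self, signal):                                 # signal: (B, T), T divisible by patch_len
        B, T = signal.shape
        patches = signal.reshape(B, T // self.patch_len, self.patch_len)
        return self.embed(patches) + self.pos[:, : T // self.patch_len]

    def forward(self, signal, mask_ratio=0.0):
        tokens = self.tokenize(signal)
        mask = None
        if mask_ratio > 0:                                      # zero out random tokens for masked pretraining
            mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
            tokens = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
        return self.encoder(tokens), mask

def pretrain_step(model, signal, opt, mask_ratio=0.3):
    """Self-supervised step: reconstruct the masked patches from unlabeled signals."""
    h, mask = model(signal, mask_ratio)
    target = signal.reshape(signal.shape[0], -1, model.patch_len)
    loss = ((model.recon_head(h) - target) ** 2)[mask].mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(model, signal, labels, opt):
    """Supervised step on scarce labeled data: classify the mean-pooled tokens."""
    h, _ = model(signal)
    loss = nn.functional.cross_entropy(model.cls_head(h.mean(dim=1)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = VibrationTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pretrain_step(model, torch.randn(8, 1024), opt)                               # unlabeled vibration windows
finetune_step(model, torch.randn(8, 1024), torch.randint(0, 10, (8,)), opt)   # few labeled samples
```

Because only the head changes between the two stages, the same pretrained encoder can be reused for new fault classes or datasets with few labels, which is the adaptation scenario the abstract emphasizes.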
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
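The NMTune entry above tunes an affine transformation of the feature space rather than the backbone itself. Its specific noise-mitigating regularizers are not detailed here, so the sketch below only illustrates generic affine feature-space tuning over a frozen, black-box encoder; the class and function names are hypothetical.

```python
# Generic sketch of light-weight affine tuning over frozen pretrained features;
# the actual regularizers of the NMTune paper are not reproduced here.
import torch
import torch.nn as nn

class AffineFeatureTuner(nn.Module):
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(feat_dim))    # learnable per-dimension scale
        self.shift = nn.Parameter(torch.zeros(feat_dim))   # learnable per-dimension shift
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                              # feats: (B, feat_dim) from a frozen encoder
        return self.head(feats * self.scale + self.shift)

def tune_step(tuner, frozen_encoder, x, y, opt):
    with torch.no_grad():                                  # the backbone is treated as a black box
        feats = frozen_encoder(x)
    loss = nn.functional.cross_entropy(tuner(feats), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU()).eval()   # stand-in frozen backbone
tuner = AffineFeatureTuner(feat_dim=64, n_classes=5)
opt = torch.optim.Adam(tuner.parameters(), lr=1e-3)
tune_step(tuner, encoder, torch.randn(16, 32), torch.randint(0, 5, (16,)), opt)
```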
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z)
- In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z)
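The In-Context Convergence entry above studies a one-layer transformer with softmax attention trained by gradient descent. A minimal, self-contained version of such a model (an illustrative parameterization, not the paper's exact setting) is:

```python
# Minimal one-layer transformer with softmax attention, trained by plain
# gradient descent on a toy regression task; the parameterization is illustrative.
import torch
import torch.nn as nn

class OneLayerAttention(nn.Module):
    def __init__(self, d_in, d_model):
        super().__init__()
        self.q = nn.Linear(d_in, d_model, bias=False)
        self.k = nn.Linear(d_in, d_model, bias=False)
        self.v = nn.Linear(d_in, d_model, bias=False)
        self.out = nn.Linear(d_model, 1, bias=False)

    def forward(self, x):                        # x: (B, N, d_in); prediction read from the last token
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return self.out(attn @ v)[:, -1, 0]      # one scalar prediction per sequence

model = OneLayerAttention(d_in=8, d_model=16)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # plain gradient descent
x, y = torch.randn(32, 10, 8), torch.randn(32)
for _ in range(100):
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```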
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z)
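The self-distillation entry above regularizes a further pre-training stage toward a frozen copy of the model. The sketch below shows that idea with a stand-in encoder and a stand-in denoising objective; the architecture, objective, and loss weight are assumptions for illustration.

```python
# Sketch of self-distillation as a regularizer for a further pre-training stage:
# a frozen copy of the already-pretrained model acts as the teacher, and the
# student is penalized for drifting away from it.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))  # stand-in pretrained model
teacher = copy.deepcopy(encoder).eval()            # frozen snapshot used as the teacher
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
alpha = 0.5                                        # strength of the distillation regularizer

for _ in range(100):
    x = torch.randn(32, 64)                        # unlabeled further-pretraining batch
    noisy = x + 0.1 * torch.randn_like(x)
    task_loss = F.mse_loss(encoder(noisy), x)      # stand-in self-supervised (denoising) objective
    with torch.no_grad():
        t = teacher(x)
    distill_loss = F.mse_loss(encoder(x), t)       # stay close to the frozen copy's outputs
    loss = task_loss + alpha * distill_loss
    opt.zero_grad(); loss.backward(); opt.step()
```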
- Automatic Rule Induction for Efficient Semi-Supervised Learning [56.91428251227253]
Semi-supervised learning has shown promise in allowing NLP models to generalize from small amounts of labeled data.
Pretrained transformer models act as black-box correlation engines that are difficult to explain and sometimes behave unreliably.
We propose tackling both of these challenges via Automatic Rule Induction (ARI), a simple and general-purpose framework.
arXiv Detail & Related papers (2022-05-18T16:50:20Z)
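The ARI entry above combines automatically induced rules with pretrained models for semi-supervised learning. The induction procedure itself is not summarized here, so the following only sketches the common weak-supervision pattern of letting simple rules label what they cover and deferring to the model otherwise; the rules shown are hypothetical placeholders.

```python
# Generic weak-supervision sketch in the spirit of rule induction: simple rules
# label the examples they cover; the (pretrained) model handles the rest.
from typing import Callable, List, Optional

Rule = Callable[[str], Optional[int]]   # a rule returns a label or None (abstain)

def keyword_rule(keyword: str, label: int) -> Rule:
    return lambda text: label if keyword in text.lower() else None

def predict(text: str, rules: List[Rule], model_predict: Callable[[str], int]) -> int:
    votes = [v for v in (r(text) for r in rules) if v is not None]
    if votes:                                   # rules that fire take precedence (majority vote)
        return max(set(votes), key=votes.count)
    return model_predict(text)                  # otherwise defer to the learned model

rules = [keyword_rule("refund", 1), keyword_rule("thanks", 0)]
print(predict("I want a refund", rules, model_predict=lambda t: 0))   # -> 1
```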
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
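The CMW-Net entry above learns an explicit, class-aware sample-weighting scheme from data via meta-learning on a clean meta set. That bi-level step is omitted below; the sketch only shows the core mechanism of a small network mapping per-sample losses to weights inside a training step, with illustrative names and sizes.

```python
# Sketch of a learned sample-weighting net applied to per-sample losses.
# The full method meta-learns this net on a clean meta set and makes it
# class-aware; that bi-level optimization is omitted from this illustration.
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Maps a per-sample loss value to a weight in (0, 1)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, losses):                      # losses: (B,)
        return self.net(losses.unsqueeze(1)).squeeze(1)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
weight_net = WeightNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(128, 20)
y = torch.randint(0, 5, (128,))                     # labels that may be noisy or imbalanced
per_sample = nn.functional.cross_entropy(model(x), y, reduction="none")
with torch.no_grad():                               # the weight net itself would be meta-learned
    weights = weight_net(per_sample)
loss = (weights * per_sample).mean()                # likely-corrupted samples get down-weighted
opt.zero_grad(); loss.backward(); opt.step()
```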
- Who supervises the supervisor? Model monitoring in production using deep feature embeddings with applications to workpiece inspection [0.0]
Machine learning has led to vast improvements in the area of autonomous process supervision.
One of the main challenges is the monitoring of live deployments of these machine learning systems.
We propose an unsupervised framework that acts on top of a supervised classification system.
arXiv Detail & Related papers (2022-01-17T19:25:33Z)
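The model-monitoring entry above layers an unsupervised check on top of a deployed classifier using deep feature embeddings. One common way to realize this, shown below as an assumption rather than the paper's exact procedure, is to flag production samples whose embeddings fall far from the training embeddings.

```python
# Sketch of unsupervised monitoring on top of a deployed classifier: embed
# reference (training) data, then flag production inputs whose embedding is
# unusually far from the reference set. The distance rule is illustrative.
import numpy as np

def nearest_distances(queries, ref, exclude_self=False):
    """Distance from each query embedding to its nearest reference embedding."""
    dists = np.linalg.norm(queries[:, None, :] - ref[None, :, :], axis=-1)
    if exclude_self:
        np.fill_diagonal(dists, np.inf)
    return dists.min(axis=1)

def fit_monitor(reference_embeddings, quantile=0.99):
    """Calibrate a distance threshold on the training (reference) embeddings."""
    d = nearest_distances(reference_embeddings, reference_embeddings, exclude_self=True)
    return reference_embeddings, np.quantile(d, quantile)

def flag_drift(embeddings, ref, threshold):
    """True where a production sample looks unlike anything seen in training."""
    return nearest_distances(embeddings, ref) > threshold

ref_emb = np.random.randn(200, 64)                 # embeddings of training data
ref, thr = fit_monitor(ref_emb)
prod_emb = np.random.randn(10, 64) + 5.0           # shifted production batch
print(flag_drift(prod_emb, ref, thr))              # mostly True -> raise an alert
```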
- Learn to Predict Vertical Track Irregularity with Extremely Imbalanced Data [6.448383767373112]
We showcase an application framework for predicting vertical track irregularity, based on a real-world, large-scale dataset produced by several operating railways in China.
We also propose a novel approach for handling imbalanced data in time series prediction tasks with adaptive data sampling and a penalized loss.
arXiv Detail & Related papers (2020-12-05T15:49:39Z)
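The track-irregularity entry above addresses extreme imbalance with adaptive data sampling and a penalized loss. The sketch below shows the standard versions of those two ingredients (a weighted sampler plus a class-weighted loss); the dataset, model, and weights are placeholders rather than the paper's specific scheme.

```python
# Sketch of two standard ingredients for extreme class imbalance:
# (1) oversample the rare class with a weighted sampler, and
# (2) penalize errors on the rare class more heavily in the loss.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

x = torch.randn(1000, 50)                          # 1000 windows of a time series, 50 features each
y = (torch.rand(1000) < 0.02).long()               # ~2% positive (irregularity) labels

# (1) adaptive sampling: draw each class roughly equally often
class_counts = torch.bincount(y, minlength=2).float()
sample_weights = (1.0 / class_counts)[y]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(y), replacement=True)
loader = DataLoader(TensorDataset(x, y), batch_size=64, sampler=sampler)

# (2) penalized loss: misclassifying the rare class costs more
model = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss(weight=class_counts.sum() / (2.0 * class_counts))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for xb, yb in loader:
    loss = criterion(model(xb), yb)
    opt.zero_grad(); loss.backward(); opt.step()
```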
- How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers? [7.727662147015879]
Task-agnostic forms of data augmentation have proven widely effective in computer vision, even on pretrained models.
We ask how effective these techniques really are when applied to pretrained transformers.
We observe a negative result, finding that techniques which previously reported strong improvements for non-pretrained models fail to consistently improve performance for pretrained transformers.
arXiv Detail & Related papers (2020-10-05T03:55:15Z)