An interpretable deep learning method for bearing fault diagnosis
- URL: http://arxiv.org/abs/2308.10292v1
- Date: Sun, 20 Aug 2023 15:22:08 GMT
- Title: An interpretable deep learning method for bearing fault diagnosis
- Authors: Hao Lu, Austin M. Bray, Chao Hu, Andrew T. Zimmerman, Hongyi Xu
- Abstract summary: We utilize a convolutional neural network (CNN) with Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations to form an interpretable Deep Learning (DL) method for classifying bearing faults.
During the model evaluation process, the proposed approach retrieves prediction basis samples from the health library according to the similarity of the feature importance.
- Score: 12.069344716912843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning (DL) has gained popularity in recent years as an effective tool
for classifying the current health and predicting the future of industrial
equipment. However, most DL models have black-box components with an underlying
structure that is too complex to be interpreted and explained to human users.
This presents significant challenges when deploying these models for
safety-critical maintenance tasks, where non-technical personnel often need to
have complete trust in the recommendations these models give. To address these
challenges, we utilize a convolutional neural network (CNN) with
Gradient-weighted Class Activation Mapping (Grad-CAM) activation map
visualizations to form an interpretable DL method for classifying bearing
faults. After the model training process, we apply Grad-CAM to identify a
training sample's feature importance and to form a library of diagnosis
knowledge (or health library) containing training samples with annotated
feature maps. During the model evaluation process, the proposed approach
retrieves prediction basis samples from the health library according to the
similarity of the feature importance. The proposed method can be easily applied
to any CNN model without modifying the model architecture, and our experimental
results show that this method can select prediction basis samples that are
intuitively and physically meaningful, improving the model's trustworthiness
for human users.
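The paper's code is not included here; the sketch below illustrates, under stated assumptions, the two stages the abstract describes: computing a Grad-CAM feature-importance map for each training sample to populate a "health library", then retrieving the most similar annotated samples for a new prediction by similarity of those maps. The 1-D CNN architecture, layer choices, and the use of cosine similarity are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): Grad-CAM feature importance on a 1-D CNN
# for vibration signals, plus cosine-similarity retrieval from a "health library".
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Illustrative 1-D CNN; the paper's architecture may differ."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

def grad_cam(model, x, target_class):
    """Grad-CAM over the last conv feature maps: ReLU(sum_k alpha_k * A_k)."""
    model.eval()
    model.zero_grad(set_to_none=True)
    feats = model.features(x)                      # (1, C, L) activation maps A_k
    feats.retain_grad()
    score = model.head(feats)[0, target_class]
    score.backward()
    alpha = feats.grad.mean(dim=2, keepdim=True)   # channel weights alpha_k
    cam = F.relu((alpha * feats).sum(dim=1)).squeeze(0)   # importance over signal length
    return cam / (cam.max() + 1e-8)

# Build the health library: a Grad-CAM map for every training sample.
model = SmallCNN()
train_x = torch.randn(100, 1, 1024)                # placeholder vibration segments
train_y = torch.randint(0, 4, (100,))
library = [(grad_cam(model, xi.unsqueeze(0), int(yi)).detach(), int(yi))
           for xi, yi in zip(train_x, train_y)]

# At evaluation time: retrieve prediction-basis samples by feature-importance similarity.
def retrieve_basis(model, x_new, k=3):
    pred = int(model(x_new).argmax(dim=1))
    cam_new = grad_cam(model, x_new, pred).detach()
    sims = torch.stack([F.cosine_similarity(cam_new, cam_i, dim=0) for cam_i, _ in library])
    return pred, torch.topk(sims, k).indices.tolist()   # k most similar library samples

pred, basis_idx = retrieve_basis(model, torch.randn(1, 1, 1024))
```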
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been regarded as a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
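The summary only names the idea; as a rough, assumption-level illustration of gradient projection for unlearning (not the paper's actual PGU update rule), one can project the unlearning direction onto the orthogonal complement of the gradient subspace spanned by the retained data, so that, to first order, those directions are left untouched:

```python
# Rough illustration (assumption, not the exact PGU algorithm): remove from the
# unlearning gradient any component lying in the retained-data gradient subspace.
import torch

def orthogonal_projection(g_forget, retained_grads):
    """Project g_forget onto the orthogonal complement of span(retained_grads)."""
    G = torch.stack(retained_grads)            # (m, d) gradients on retained data
    Q, _ = torch.linalg.qr(G.T)                # orthonormal basis of the retained subspace
    return g_forget - Q @ (Q.T @ g_forget)     # component orthogonal to that subspace

d = 10
retained = [torch.randn(d) for _ in range(3)]  # e.g. per-batch gradients on kept data
g_forget = torch.randn(d)                      # update direction computed on data to forget
update = orthogonal_projection(g_forget, retained)

# The projected update is (numerically) orthogonal to every retained gradient,
# so stepping along it should not, to first order, change behaviour on the kept data.
print([float(torch.dot(update, r)) for r in retained])
```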
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
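The summary does not give the acquisition function or the reinforcement rule; a toy sketch of mixing the three criteria with adaptively re-weighted terms could look like the following, where the per-sample scores and the exponentiated-gradient update are assumptions, not the paper's mechanism:

```python
# Toy sketch (assumptions throughout): score unlabeled samples by a weighted mix of
# uncertainty, diversity, and representativity, and nudge the weights toward whichever
# criterion yielded the largest observed reward at the current iteration.
import numpy as np

rng = np.random.default_rng(0)
n_pool = 200
uncertainty = rng.random(n_pool)       # e.g. predictive entropy per sample
diversity = rng.random(n_pool)         # e.g. distance to already-labeled samples
representativity = rng.random(n_pool)  # e.g. local density of the sample's neighborhood

weights = np.ones(3) / 3               # start with equal emphasis on all criteria

def select_batch(weights, k=10):
    scores = (weights[0] * uncertainty +
              weights[1] * diversity +
              weights[2] * representativity)
    return np.argsort(-scores)[:k]     # indices of the k highest-scoring samples

def update_weights(weights, rewards, lr=0.5):
    """Exponentiated-gradient style update from per-criterion rewards."""
    logits = np.log(weights + 1e-12) + lr * np.asarray(rewards)
    w = np.exp(logits - logits.max())
    return w / w.sum()

batch = select_batch(weights)
# Rewards would come from the measured benefit attributable to each criterion;
# placeholder values here just to show the update.
weights = update_weights(weights, rewards=[0.2, 0.05, 0.1])
```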
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- A Modified PINN Approach for Identifiable Compartmental Models in Epidemiology with Applications to COVID-19 [0.0]
We present an approach to analyzing accessible data on COVID-19's development in the U.S. using a variation of the "Physics Informed Neural Networks" (PINN) approach.
Aspects of identifiability of the model parameters are also assessed, as well as methods of denoising available data using a wavelet transform.
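The wavelet-transform denoising step is only mentioned in passing; a minimal sketch of the usual soft-thresholding recipe with PyWavelets follows, where the wavelet family, decomposition level, and universal threshold are assumptions rather than the paper's stated choices:

```python
# Minimal sketch of wavelet soft-threshold denoising for a noisy case-count series.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise scale from the finest-level detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    # Soft-threshold the detail coefficients; keep the approximation coefficients intact.
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

t = np.arange(365)
noisy_cases = 1000 * np.exp(-((t - 180) / 60) ** 2) + 50 * np.random.randn(365)
smooth_cases = wavelet_denoise(noisy_cases)
```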
arXiv Detail & Related papers (2022-08-01T23:09:32Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Improving a neural network model by explanation-guided training for glioma classification based on MRI data [0.0]
Interpretability methods have become a popular way to gain insight into the decision-making process of deep learning models.
We propose a method for explanation-guided training that uses a Layer-wise relevance propagation (LRP) technique.
We experimentally verified our method on a convolutional neural network (CNN) model for low-grade and high-grade glioma classification problems.
arXiv Detail & Related papers (2021-07-05T13:27:28Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)