Interpreting What Typical Fault Signals Look Like via Prototype-matching
- URL: http://arxiv.org/abs/2403.07033v1
- Date: Mon, 11 Mar 2024 05:47:07 GMT
- Title: Interpreting What Typical Fault Signals Look Like via Prototype-matching
- Authors: Qian Chen and Xingjian Dong and Zhike Peng
- Abstract summary: A prototype matching network (PMN) is proposed by combining human-inherent prototype matching with an autoencoder (AE).
It offers three interpreting paths: classification logic, fault prototypes, and matching contributions.
This ability broadens human understanding and provides a promising solution from interpretability research to AI-for-Science.
- Score: 3.774984871230879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks, with powerful nonlinear mapping and classification
capabilities, are widely applied in mechanical fault diagnosis to ensure
safety. However, as typical black-box models, their application is limited in scenarios that demand high reliability. To understand the classification logic and explain what typical fault signals look like, the prototype matching network (PMN) is proposed by combining human-inherent prototype matching with an autoencoder (AE). The PMN matches the AE-extracted feature against each prototype and selects the most similar prototype as the prediction result. It offers three interpreting paths: classification logic, fault prototypes, and matching contributions. Conventional diagnosis and domain generalization experiments demonstrate its competitive diagnostic performance and clear advantages in representation learning. Moreover, the learned typical fault signals (i.e., sample-level prototypes) show an ability to denoise and extract subtle key features that experts find difficult to capture. This ability broadens human understanding and offers a promising bridge from interpretability research to AI-for-Science.
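The matching step described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch-style illustration of the core idea only: encode a signal, compare its latent feature with one learned prototype per fault class, and predict the class of the most similar prototype. The encoder architecture, cosine similarity as the matching score, and all names and sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeMatcher(nn.Module):
    """Illustrative prototype-matching classifier (not the paper's exact PMN)."""

    def __init__(self, signal_len: int = 1024, latent_dim: int = 64, n_classes: int = 10):
        super().__init__()
        # A small MLP stands in for the AE encoder used in the paper.
        self.encoder = nn.Sequential(
            nn.Linear(signal_len, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # One learnable prototype vector per fault class.
        self.prototypes = nn.Parameter(torch.randn(n_classes, latent_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)  # (batch, latent_dim)
        # Similarity of each sample's feature to every class prototype.
        sims = F.cosine_similarity(z.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1)
        return sims          # (batch, n_classes); prediction = argmax over classes

model = PrototypeMatcher()
signals = torch.randn(8, 1024)                    # a batch of raw vibration signals
predicted_fault = model(signals).argmax(dim=1)    # class of the most similar prototype
```

Per the abstract, the learned prototypes correspond to sample-level "typical fault signals"; presumably the AE's decoder is what maps them back to the signal domain for inspection.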
Related papers
- ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning [0.21079694661943607]
ProtoECGNet is a prototype-based deep learning model for interpretable, multi-label ECG classification.
We evaluate ProtoECGNet on all 71 diagnostic labels from the PTB-XL dataset.
ProtoECGNet shows that prototype learning can be effectively scaled to complex, multi-label time-series classification.
arXiv Detail & Related papers (2025-04-11T17:23:37Z) - Sparse Prototype Network for Explainable Pedestrian Behavior Prediction [60.80524827122901]
We present Sparse Prototype Network (SPN), an explainable method designed to simultaneously predict a pedestrian's future action, trajectory, and pose.
Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features.
arXiv Detail & Related papers (2024-10-16T03:33:40Z) - GAProtoNet: A Multi-head Graph Attention-based Prototypical Network for Interpretable Text Classification [1.170190320889319]
We introduce GAProtoNet, a novel white-box Multi-head Graph Attention-based Prototypical Network.
Our approach achieves superior results without sacrificing the accuracy of the original black-box LMs.
Our case study and visualization of prototype clusters also demonstrate its effectiveness in explaining the decisions of black-box models built with LMs.
arXiv Detail & Related papers (2024-09-20T08:15:17Z) - Enhanced Prototypical Part Network (EPPNet) For Explainable Image Classification Via Prototypes [16.528373143163275]
We introduce the Enhanced Prototypical Part Network (EPPNet) for image classification.
EPPNet achieves strong performance while discovering relevant prototypes that can be used to explain the classification results.
Our evaluations on the CUB-200-2011 dataset show that the EPPNet outperforms state-of-the-art xAI-based methods.
arXiv Detail & Related papers (2024-08-08T17:26:56Z) - Mixed Prototype Consistency Learning for Semi-supervised Medical Image Segmentation [0.0]
We propose the Mixed Prototype Consistency Learning (MPCL) framework, which includes a Mean Teacher and an auxiliary network.
The Mean Teacher generates prototypes for labeled and unlabeled data, while the auxiliary network produces additional prototypes for mixed data processed by CutMix.
High-quality global prototypes for each class are formed by fusing two enhanced prototypes, optimizing the distribution of hidden embeddings used in consistency learning.
arXiv Detail & Related papers (2024-04-16T16:51:12Z) - ProtoTEx: Explaining Model Decisions with Prototype Tensors [27.779971257213553]
ProtoTEx is a novel white-box NLP classification architecture based on prototype networks.
We describe a novel interleaved training algorithm that effectively handles classes characterized by the absence of indicative features.
On a propaganda detection task, ProtoTEx accuracy matches BART-large and exceeds BERT-large with the added benefit of providing faithful explanations.
arXiv Detail & Related papers (2022-04-11T22:08:45Z) - Predicting Unreliable Predictions by Shattering a Neural Network [145.3823991041987]
Piecewise linear neural networks can be split into subfunctions.
Each subfunction has its own activation pattern, domain, and empirical error.
The empirical error of the full network can be written as an expectation over subfunctions.
arXiv Detail & Related papers (2021-06-15T18:34:41Z) - On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z) - Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z) - Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions (see the sketch below).
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
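As an aside on the last entry, the conditional-Gaussian idea can be illustrated with a minimal, hypothetical open-set scoring sketch: fit one Gaussian per known class in a latent space, classify by the highest class log-likelihood, and flag a sample as unknown when even the best score falls below a threshold. This is a deliberate simplification for intuition only, not the CPGM model; the latent dimensions, threshold, and function names are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(latents: np.ndarray, labels: np.ndarray) -> dict:
    """Fit one Gaussian per known class in latent space (illustration only)."""
    gaussians = {}
    for c in np.unique(labels):
        z = latents[labels == c]
        # Small diagonal term keeps the covariance well conditioned.
        cov = np.cov(z, rowvar=False) + 1e-6 * np.eye(z.shape[1])
        gaussians[int(c)] = multivariate_normal(mean=z.mean(axis=0), cov=cov)
    return gaussians

def open_set_predict(z: np.ndarray, gaussians: dict, threshold: float = -50.0) -> int:
    """Return the best-matching known class, or -1 for 'unknown'."""
    scores = {c: g.logpdf(z) for c, g in gaussians.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else -1

# Toy usage with random stand-in latent features for two known classes.
rng = np.random.default_rng(0)
latents = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
models = fit_class_gaussians(latents, labels)
print(open_set_predict(rng.normal(0, 1, 8), models))   # likely class 0
print(open_set_predict(rng.normal(20, 1, 8), models))  # likely -1 (unknown)
```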
This list is automatically generated from the titles and abstracts of the papers on this site.