Explaining Deep Neural Networks for Bearing Fault Detection with
Vibration Concepts
- URL: http://arxiv.org/abs/2310.11450v1
- Date: Tue, 17 Oct 2023 17:58:19 GMT
- Title: Explaining Deep Neural Networks for Bearing Fault Detection with
Vibration Concepts
- Authors: Thomas Decker, Michael Lebacher and Volker Tresp
- Abstract summary: We investigate how to leverage concept-based explanation techniques in the context of bearing fault detection with deep neural networks trained on vibration signals.
Our evaluations demonstrate that explaining opaque models in terms of vibration concepts enables human-comprehensible and intuitive insights about their inner workings.
- Score: 23.027545485830032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concept-based explanation methods, such as Concept Activation Vectors, are
potent means to quantify how abstract or high-level characteristics of input
data influence the predictions of complex deep neural networks. However,
applying them to industrial prediction problems is challenging as it is not
immediately clear how to define and access appropriate concepts for individual
use cases and specific data types. In this work, we investigate how to leverage
established concept-based explanation techniques in the context of bearing
fault detection with deep neural networks trained on vibration signals. Since
bearings are prevalent in almost all rotating equipment, ensuring the
reliability of non-transparent fault detection models is crucial to prevent
costly repairs and downtime of industrial machinery. Our evaluations
demonstrate that explaining opaque models in terms of vibration concepts
enables human-comprehensible and intuitive insights about their inner workings,
but the underlying assumptions need to be carefully validated first.
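
To make the approach concrete, the following is a minimal, self-contained sketch of how a Concept Activation Vector (CAV) and a TCAV-style score could be computed for a vibration concept. The toy 1D CNN, the synthetic impulse signals standing in for a bearing fault concept, the 105 Hz characteristic frequency, and the layer choice are all illustrative assumptions, not the authors' actual models or data.

```python
# Hedged sketch of a CAV / TCAV-style analysis for a vibration-based fault
# classifier. Everything here -- the toy 1D CNN, the synthetic "vibration
# concept" signals, and the chosen layer -- is a hypothetical stand-in.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
rng = np.random.default_rng(0)

FS, N = 12_000, 2048  # assumed sampling rate [Hz] and samples per segment

def synth_signal(fault_freq=None):
    """Synthetic vibration segment: noise plus optional periodic impulses."""
    t = np.arange(N) / FS
    x = 0.1 * rng.standard_normal(N)
    if fault_freq is not None:
        # short bursts repeating at the assumed fault characteristic frequency
        impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.995).astype(float)
        x += impulses * np.sin(2 * np.pi * 3000 * t)
    return x.astype(np.float32)

class ToyCNN(nn.Module):
    """Tiny 1D CNN standing in for the fault detection model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, 64, stride=8), nn.ReLU(),
            nn.Conv1d(8, 16, 16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.head = nn.Linear(16 * 8, 2)  # healthy vs. faulty

    def forward(self, x):
        return self.head(self.features(x))

model = ToyCNN().eval()  # in practice: a trained model

def activations(signals):
    x = torch.from_numpy(np.stack(signals)).unsqueeze(1)
    return model.features(x)

# 1) Define the concept by example: segments containing the assumed fault
#    frequency vs. random healthy-like segments.
concept = [synth_signal(fault_freq=105.0) for _ in range(50)]
random_ = [synth_signal() for _ in range(50)]

with torch.no_grad():
    A = torch.cat([activations(concept), activations(random_)]).numpy()
y = np.array([1] * len(concept) + [0] * len(random_))

# 2) The CAV is the normal of a linear boundary separating concept from
#    random activations in the chosen layer.
clf = LogisticRegression(max_iter=1000).fit(A, y)
cav = torch.from_numpy(clf.coef_[0].astype(np.float32))

# 3) TCAV-style score: fraction of test inputs whose "faulty" logit increases
#    when the activation moves in the CAV direction (directional derivative > 0).
test = [synth_signal(fault_freq=105.0) for _ in range(20)]
acts = activations(test).detach().requires_grad_(True)
model.head(acts)[:, 1].sum().backward()  # gradient of the faulty logit w.r.t. activations
tcav_score = ((acts.grad @ cav) > 0).float().mean().item()
print(f"TCAV score for the assumed fault-frequency concept: {tcav_score:.2f}")
```

In the paper's setting, the concept examples would instead be vibration signals exhibiting a physically meaningful characteristic (for example, energy at a bearing's characteristic fault frequencies), and, as the abstract cautions, the linear-separability assumption behind the CAV should be validated before the resulting scores are trusted.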
Related papers
- Explaining Deep Neural Networks by Leveraging Intrinsic Methods [0.9790236766474201]
This thesis contributes to the field of eXplainable AI, focusing on enhancing the interpretability of deep neural networks.
It introduces novel techniques that make these networks more interpretable by analyzing their inner workings.
It also investigates neurons within trained deep neural networks, shedding light on overlooked phenomena related to their activation values.
arXiv Detail & Related papers (2024-07-17T01:20:17Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions are commonly assumed to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Neuro-symbolic model for cantilever beams damage detection [0.0]
We propose a neuro-symbolic model for the detection of damage in cantilever beams based on a novel cognitive architecture.
The hybrid discriminative model is introduced under the name Logic Convolutional Neural Regressor.
arXiv Detail & Related papers (2023-05-04T13:12:39Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Understanding and Enhancing Robustness of Concept-based Models [41.20004311158688]
We study the robustness of concept-based models to adversarial perturbations.
We first propose and analyze different malicious attacks to evaluate the security vulnerability of concept-based models.
We then propose a potential general defense mechanism based on adversarial training to increase the robustness of these systems against the proposed attacks.
arXiv Detail & Related papers (2022-11-29T10:43:51Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
arXiv Detail & Related papers (2021-03-03T06:17:03Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Unifying Model Explainability and Robustness via Machine-Checkable Concepts [33.88198813484126]
We propose a robustness-assessment framework, at the core of which is the idea of using machine-checkable concepts.
Our framework defines a large number of concepts that the explanations could be based on and performs the explanation-conformity check at test time to assess prediction robustness.
Experiments on real-world datasets and human surveys show that our framework is able to enhance prediction robustness significantly.
arXiv Detail & Related papers (2020-07-01T05:21:16Z)
- How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks [19.648814035399013]
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks.
We propose a new framework that converts any explanation method for neural networks into an explanation method for Bayesian neural networks.
We demonstrate the effectiveness and usefulness of our approach extensively in various experiments (a minimal sketch of the underlying idea follows after this list).
arXiv Detail & Related papers (2020-06-16T08:54:42Z)
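
The idea behind this last framework can be illustrated with a small, hedged sketch: sample several plausible networks from an approximate posterior (Monte Carlo dropout is used below purely as a stand-in), apply an ordinary input-gradient explanation to each sampled network, and aggregate the per-sample explanations into a mean attribution and an uncertainty estimate. The toy model and all names are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: turning a per-network explanation (plain input-gradient
# saliency) into an explanation with uncertainty for an approximately
# Bayesian network. MC dropout stands in for posterior weight sampling.
import torch
import torch.nn as nn

torch.manual_seed(0)

class DropoutMLP(nn.Module):
    """Toy classifier; dropout stays active to emulate weight sampling."""
    def __init__(self, d_in=64, d_hidden=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(d_hidden, n_classes))

    def forward(self, x):
        return self.net(x)

def gradient_saliency(model, x, target_class):
    """Ordinary input-gradient explanation for one sampled network."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    return x.grad.detach()

model = DropoutMLP()
model.train()  # keep dropout active at "test" time (MC dropout)

x = torch.randn(1, 64)  # a hypothetical input feature vector
samples = torch.stack([gradient_saliency(model, x, target_class=1)
                       for _ in range(30)])  # 30 sampled networks

mean_explanation = samples.mean(dim=0)  # aggregated attribution
explanation_std = samples.std(dim=0)    # how much the explanation itself varies
print(mean_explanation.shape, explanation_std.shape)
```

Input dimensions where the standard deviation is large relative to the mean attribution indicate explanations that should not be trusted much, which is the kind of uncertainty quantification that framework targets.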