Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning
- URL: http://arxiv.org/abs/2310.12967v1
- Date: Thu, 19 Oct 2023 17:58:11 GMT
- Title: Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning
- Authors: Thomas Decker, Michael Lebacher, and Volker Tresp
- Abstract summary: In this work, we focus on the specific task of detecting faults in rolling element bearings from vibration signals.
We propose a novel and domain-specific feature attribution framework that allows us to evaluate how well the underlying logic of a model corresponds with expert reasoning.
- Score: 20.488966890562004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning has already been successfully applied to analyze industrial
sensor data in a variety of relevant use cases. However, the opaque nature of
many well-performing methods poses a major obstacle for real-world deployment.
Explainable AI (XAI) and especially feature attribution techniques promise to
enable insights into how such models form their decisions. However, the plain
application of such methods often fails to provide truly informative and
problem-specific insights to domain experts. In this work, we focus on the
specific task of detecting faults in rolling element bearings from vibration
signals. We propose a novel and domain-specific feature attribution framework
that allows us to evaluate how well the underlying logic of a model corresponds
with expert reasoning. Using this framework, we are able to validate the
trustworthiness of different well-performing deep learning models and to
successfully anticipate their generalization ability. Our methodology
demonstrates how signal processing tools can effectively be used to enhance
Explainable AI techniques and can serve as a template for similar problems.
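For illustration, the kind of check such a framework enables could be sketched as follows: compute a feature attribution for a fault prediction and test whether its envelope spectrum concentrates energy at a bearing's characteristic fault frequencies. This is a minimal sketch under assumptions; the 1D CNN, sampling rate, fault frequency, and tolerance are hypothetical placeholders, not the authors' actual setup:

```python
# Hypothetical sketch: comparing feature attributions on a vibration signal
# against bearing characteristic fault frequencies via the envelope spectrum.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import hilbert

FS = 12_000   # sampling rate in Hz (assumed)
BPFO = 107.3  # ball-pass frequency, outer race (example value, assumed)

# A small 1D CNN standing in for any well-performing fault classifier.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=64, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(32), nn.Flatten(), nn.Linear(8 * 32, 2),
)

signal = torch.randn(1, 1, FS)  # one second of (dummy) vibration data
signal.requires_grad_(True)

# Saliency: gradient of the "faulty" logit w.r.t. the raw input samples.
model(signal)[0, 1].backward()
attribution = signal.grad[0, 0].detach().numpy()

# Envelope spectrum of the attribution via the Hilbert transform.
envelope = np.abs(hilbert(attribution))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1.0 / FS)

# Fraction of spectral energy near the fault frequency and its harmonics.
mask = np.zeros_like(freqs, dtype=bool)
for k in (1, 2, 3):
    mask |= np.abs(freqs - k * BPFO) < 2.0  # +-2 Hz tolerance (assumed)
score = spectrum[mask].sum() / spectrum.sum()
print(f"attribution energy at fault harmonics: {score:.3f}")
```

A high energy fraction at the fault harmonics would suggest the model bases its decision on physically meaningful signal components, i.e., that it "thinks like an engineer".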
Related papers
- Underwater Object Detection in the Era of Artificial Intelligence: Current, Challenge, and Future [119.88454942558485]
Underwater object detection (UOD) aims to identify and localise objects in underwater images or videos.
In recent years, artificial intelligence (AI) based methods, especially deep learning methods, have shown promising performance in UOD.
arXiv Detail & Related papers (2024-10-08T00:25:33Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- On-Robot Bayesian Reinforcement Learning for POMDPs [16.667924736270415]
This paper advances Bayesian reinforcement learning for robotics by proposing a specialized framework for physical systems.
We capture prior domain knowledge in a factored representation, then demonstrate that the posterior factorizes in a similar shape, and ultimately formalize the model in a Bayesian framework.
We then introduce a sample-based online solution method, based on Monte-Carlo tree search and particle filtering, specialized to solve the resulting model.
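As a rough, generic sketch of the particle-filtering component such a sample-based solver relies on (the toy transition and observation models below are assumed; this is not the paper's specialized implementation):

```python
# Generic bootstrap particle filter sketch for POMDP belief tracking.
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of particles

def transition(states, action):
    # Toy dynamics: drift by the action plus Gaussian process noise.
    return states + action + rng.normal(0.0, 0.1, size=states.shape)

def obs_likelihood(observation, states):
    # Toy sensor model: Gaussian observation noise around the true state.
    return np.exp(-0.5 * ((observation - states) / 0.2) ** 2)

def belief_update(particles, action, observation):
    """One sequential-importance-resampling step."""
    particles = transition(particles, action)
    weights = obs_likelihood(observation, particles)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]  # resampled, approximately the posterior belief

particles = rng.normal(0.0, 1.0, size=N)  # prior belief over a 1D state
particles = belief_update(particles, action=0.5, observation=0.7)
print("posterior mean:", particles.mean())
```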
arXiv Detail & Related papers (2023-07-22T01:16:29Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
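One representative pattern from this line of work, sketched here under assumptions (a simple gradient-times-activation relevance score and crude pruning; the paper surveys many more refined techniques):

```python
# Hedged sketch: rank hidden units by a gradient-times-activation relevance
# score and prune the least relevant ones. Illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(64, 10)

acts = {}
def hook(mod, inp, out):
    out.retain_grad()  # keep gradients of this non-leaf activation
    acts["h"] = out
model[1].register_forward_hook(hook)

out = model(x)
out[:, 0].sum().backward()
relevance = (acts["h"] * acts["h"].grad).abs().mean(dim=0)  # per-unit score

# Zero out the weights feeding the k least relevant hidden units.
k = 8
prune = relevance.argsort()[:k]
with torch.no_grad():
    model[0].weight[prune] = 0.0
    model[0].bias[prune] = 0.0
print(f"pruned {k} of 32 hidden units")
```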
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims to provide intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z)
- Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI [3.4823710414760516]
Despite XAI tools' strength in translating model behavior, critics have raised concerns about the potential for XAI tools to be used for fairwashing.
We created a framework for evaluating explainable AI tools with respect to their capabilities for detecting and addressing issues of bias and fairness.
We found that despite their capabilities in simplifying and explaining model behavior, many prominent XAI tools lack features that could be critical in detecting bias.
arXiv Detail & Related papers (2021-06-14T15:14:03Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs), in contrast, use generic architectures that learn to operate from data and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
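A common instance of this hybrid idea is algorithm unrolling, sketched here under assumptions: classical ISTA iterations for sparse recovery become network layers with learned step sizes and thresholds, while the known measurement matrix supplies the principled model. This is an illustrative sketch, not code from the paper:

```python
# Hedged sketch of model-based deep learning via unrolled ISTA.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A: torch.Tensor, n_iters: int = 10):
        super().__init__()
        self.A = A  # known measurement matrix: the "principled model" part
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))     # learned
        self.thresh = nn.Parameter(torch.full((n_iters,), 0.05))  # learned

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1])
        for t in range(len(self.step)):
            # Gradient step on ||Ax - y||^2, then soft-thresholding.
            x = x - self.step[t] * self.A.T @ (self.A @ x - y)
            x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[t], min=0)
        return x

A = torch.randn(20, 50)  # assumed known forward model
net = UnrolledISTA(A)
y = A @ torch.zeros(50).index_fill_(0, torch.tensor([3, 17]), 1.0)
print(net(y)[:5])        # estimate of the sparse signal
```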
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
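A generic sketch of such a latent-space counterfactual search (placeholder generator and classifier, and a weighted-sum stand-in for the paper's more elaborate multi-objective formulation):

```python
# Hedged sketch: optimize a GAN latent code so the generated sample flips a
# classifier while staying plausible (i.e., close to the latent prior).
import torch
import torch.nn.functional as F

G = torch.nn.Linear(16, 32)   # placeholder generator: z -> sample
clf = torch.nn.Linear(32, 2)  # placeholder binary classifier
for p in list(G.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)   # only the latent code is optimized

z = torch.randn(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
target = torch.tensor(1)      # class the counterfactual should obtain

for _ in range(200):
    opt.zero_grad()
    x = G(z)
    # Two objectives combined as a weighted sum: flip the prediction while
    # keeping z near the latent prior so the sample remains plausible.
    loss = (F.cross_entropy(clf(x).unsqueeze(0), target.unsqueeze(0))
            + 0.1 * z.pow(2).mean())
    loss.backward()
    opt.step()

print("counterfactual class:", clf(G(z)).argmax().item())
```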
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.