A Review of Explainable Artificial Intelligence in Manufacturing
- URL: http://arxiv.org/abs/2107.02295v1
- Date: Mon, 5 Jul 2021 21:59:55 GMT
- Title: A Review of Explainable Artificial Intelligence in Manufacturing
- Authors: Georgios Sofianidis, Jože M. Rožanec, Dunja Mladenić,
Dimosthenis Kyriazis
- Abstract summary: The implementation of Artificial Intelligence (AI) systems in the manufacturing domain enables higher production efficiency, outstanding performance, and safer operations.
Despite the high accuracy of these models, they are mostly considered black boxes: they are unintelligible to humans.
We present an overview of Explainable Artificial Intelligence (XAI) techniques as a means of boosting the transparency of models.
- Score: 0.8793721044482613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The implementation of Artificial Intelligence (AI) systems in the
manufacturing domain enables higher production efficiency, outstanding
performance, and safer operations, leveraging powerful tools such as deep
learning and reinforcement learning techniques. Despite the high accuracy of
these models, they are mostly considered black boxes: they are unintelligible
to humans. This opacity undermines trust in the system, a factor that is critical
in the context of decision-making. We present an overview of Explainable
Artificial Intelligence (XAI) techniques as a means of boosting the
transparency of models. We analyze different metrics to evaluate these
techniques and describe several application scenarios in the manufacturing
domain.
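To make the notion of a transparency-boosting technique concrete, here is a minimal sketch of one classic model-agnostic XAI approach, the global surrogate: an interpretable tree is trained to mimic a black-box model's predictions. The dataset, model choices, and sensor names are illustrative assumptions, not taken from the review.

```python
# Hedged sketch: explain a black-box model with an interpretable global
# surrogate. All data and names here are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to mimic the black box's outputs,
# yielding human-readable rules that approximate its decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

names = [f"sensor_{i}" for i in range(6)]   # hypothetical sensor channels
print(export_text(surrogate, feature_names=names))
print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
```

The printed rules approximate the black box's decision logic, and the fidelity score indicates how faithfully the surrogate mimics it.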
Related papers
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for assessing the agreement between different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
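The paper's actual framework is not reproduced here; as a stand-in, the sketch below rank-correlates the per-feature attributions that two explainability techniques assign to the same prediction, one plausible agreement measure. The attribution values are invented for illustration.

```python
# One plausible agreement measure (an illustration, not the paper's actual
# framework): rank-correlate per-feature attributions from two techniques.
import numpy as np
from scipy.stats import spearmanr

# Stand-ins for attributions from, e.g., SHAP and LIME on one instance.
attr_a = np.array([0.42, -0.10, 0.05, 0.31, -0.22])
attr_b = np.array([0.38, -0.02, 0.11, 0.25, -0.30])

rho, _ = spearmanr(attr_a, attr_b)
print(f"rank agreement between techniques: {rho:.2f}")
```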
arXiv Detail & Related papers (2024-10-28T09:45:34Z) - Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z) - SHIELD: A regularization technique for eXplainable Artificial Intelligence [9.658282892513386]
This paper introduces SHIELD (Selective Hidden Input Evaluation for Learning Dynamics), a regularization technique for explainable artificial intelligence.
In contrast to conventional approaches, SHIELD regularization seamlessly integrates into the objective function, enhancing model explainability while also improving performance.
Experimental validation on benchmark datasets underscores SHIELD's effectiveness in improving Artificial Intelligence model explainability and overall performance.
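SHIELD's exact formulation is not spelled out in the summary, so the following is only a generic sketch of folding an explainability-oriented penalty into a training objective; the input-gradient penalty and the trade-off weight `lam` are assumptions, not the authors' method.

```python
# Generic sketch (NOT the actual SHIELD formulation): add an
# explainability-oriented regularizer directly to the training loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 10, requires_grad=True)   # toy batch
y = torch.randint(0, 2, (64,))

logits = model(x)
task_loss = criterion(logits, y)

# Input-gradient penalty: discourage sharp, hard-to-explain input
# sensitivities (a stand-in explainability regularizer).
grads = torch.autograd.grad(task_loss, x, create_graph=True)[0]
penalty = grads.pow(2).sum()

lam = 0.1  # assumed trade-off weight
loss = task_loss + lam * penalty
loss.backward()
opt.step()
```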
arXiv Detail & Related papers (2024-04-03T09:56:38Z) - Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review [1.6006550105523192]
This review explores the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs).
It examines both foundational and advanced prompt engineering methodologies, including techniques such as self-consistency, chain-of-thought, and generated knowledge.
The review also highlights the essential role of prompt engineering in advancing AI capabilities, providing a structured framework for future research and application.
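As a concrete illustration of one technique the review covers, below is a minimal self-consistency sketch: sample several chain-of-thought completions and majority-vote on the final answers. `sample_completion` is a hypothetical stand-in for any LLM sampling API, and the prompt is invented.

```python
# Minimal self-consistency sketch; `sample_completion` is a hypothetical
# stand-in for any LLM sampling API called with temperature > 0.
from collections import Counter

COT_PROMPT = (
    "Q: A line has 14 machines; 3 are down for maintenance and 2 more fail. "
    "How many machines are running?\n"
    "A: Let's think step by step."
)

def sample_completion(prompt: str) -> str:
    # Placeholder: in practice, sample a reasoning chain from an LLM here.
    return "14 - 3 - 2 = 9. The answer is 9."

# Self-consistency: sample several reasoning chains, then majority-vote
# on the final answers rather than trusting a single chain.
answers = []
for _ in range(5):
    chain = sample_completion(COT_PROMPT)
    answers.append(chain.rsplit("The answer is", 1)[-1].strip(" ."))

print(Counter(answers).most_common(1)[0][0])  # -> "9"
```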
arXiv Detail & Related papers (2023-10-23T09:15:18Z) - Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault
Detection with Deep Learning [20.488966890562004]
In this work, we focus on the specific task of detecting faults in rolling element bearings from vibration signals.
We propose a novel and domain-specific feature attribution framework that allows us to evaluate how well the underlying logic of a model corresponds with expert reasoning.
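The authors' framework is domain-specific; the toy sketch below captures the general idea by measuring how much of a model's per-frequency attribution mass falls in a band an expert would associate with a bearing fault. The sampling rate, fault frequency, and attribution vector are all illustrative assumptions.

```python
# Hypothetical sketch: score how much of a model's attribution mass falls
# in a frequency band linked to a known bearing fault. All values assumed.
import numpy as np

fs = 12_000                      # sampling rate (Hz), assumed
n = 4096
freqs = np.fft.rfftfreq(n, d=1 / fs)

rng = np.random.default_rng(0)
attribution = rng.random(freqs.size)   # stand-in for per-frequency saliency

bpfo = 107.0                     # assumed outer-race fault frequency (Hz)
band = (freqs > bpfo - 5) & (freqs < bpfo + 5)

# Fraction of attribution mass on the physically meaningful band: higher
# values suggest the model's logic aligns with expert reasoning.
alignment = attribution[band].sum() / attribution.sum()
print(f"attribution mass in fault band: {alignment:.3f}")
```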
arXiv Detail & Related papers (2023-10-19T17:58:11Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI in practice to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
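One generic instance of such XAI-driven improvement, sketched under assumed data and models (not an experiment from the paper): compute permutation importances, drop features that do not help, and retrain.

```python
# Hedged sketch of one XAI-based improvement loop: prune features the
# explanation method deems unhelpful, then retrain. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(Xtr, ytr)
imp = permutation_importance(model, Xte, yte, n_repeats=5, random_state=1)

keep = imp.importances_mean > 0              # keep features that help at all
pruned = RandomForestClassifier(random_state=1).fit(Xtr[:, keep], ytr)
print("before:", model.score(Xte, yte), "after:", pruned.score(Xte[:, keep], yte))
```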
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
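CEILS itself intervenes in a latent space informed by causal relations; the sketch below shows only the simpler, generic counterfactual search it builds on: perturb an input until a frozen classifier flips to the target class while staying close to the original instance. The classifier and data are stand-ins.

```python
# Minimal counterfactual-search sketch (NOT the CEILS method itself):
# nudge an input toward the desired class while penalizing distance
# from the original instance.
import torch

torch.manual_seed(0)
clf = torch.nn.Linear(4, 2)       # stand-in for a trained, frozen classifier
x = torch.randn(1, 4)
target = torch.tensor([1])

x_cf = x.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = (torch.nn.functional.cross_entropy(clf(x_cf), target)
            + 0.1 * (x_cf - x).pow(2).sum())   # stay close to the original
    loss.backward()
    opt.step()

print("counterfactual class:", clf(x_cf).argmax().item())
```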
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of AI's most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
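A toy version of one such diagnostic check, with invented saliency scores and an invented human rationale mask: compare the top-k tokens ranked by a saliency method against the tokens humans marked as salient.

```python
# Toy diagnostic check (values invented): overlap between the tokens a
# saliency method ranks highest and the tokens humans marked as salient.
import numpy as np

saliency = np.array([0.9, 0.1, 0.7, 0.2, 0.05, 0.6])   # per-token scores
human = np.array([1, 0, 1, 0, 0, 1], dtype=bool)        # human rationale mask

k = human.sum()
top_k = np.zeros_like(human)
top_k[np.argsort(saliency)[-k:]] = True                 # top-k salient tokens

precision = (top_k & human).sum() / top_k.sum()
recall = (top_k & human).sum() / human.sum()
f1 = 2 * precision * recall / (precision + recall)
print(f"token F1 vs human rationales: {f1:.2f}")
```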
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - AI-based Modeling and Data-driven Evaluation for Smart Manufacturing
Processes [56.65379135797867]
We propose a dynamic algorithm for gaining useful insights about semiconductor manufacturing processes.
We combine a Genetic Algorithm with a Neural Network to build an intelligent feature selection algorithm.
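A minimal sketch of GA-driven feature selection in this spirit (not the paper's algorithm): chromosomes are feature bitmasks and fitness is cross-validated accuracy; a logistic model stands in here for the paper's neural network evaluator.

```python
# Toy GA feature selection (an illustration, not the paper's algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)   # stand-in for a neural network
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((12, 20)) < 0.5             # initial population of bitmasks
for _ in range(10):                          # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]   # keep the fittest half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        cut = rng.integers(1, 19)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(20) < 0.05               # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```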
arXiv Detail & Related papers (2020-08-29T14:57:53Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.