The Thousand Faces of Explainable AI Along the Machine Learning Life
Cycle: Industrial Reality and Current State of Research
- URL: http://arxiv.org/abs/2310.07882v1
- Date: Wed, 11 Oct 2023 20:45:49 GMT
- Authors: Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald
Schnitzer, and Stefan H. Weber
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the practical relevance of explainable
artificial intelligence (XAI), with a special focus on the producing industries,
and relate it to the current state of academic XAI research. Our findings are
based on an extensive series of interviews regarding the role and applicability
of XAI along the Machine Learning (ML) lifecycle in current industrial practice
and its expected relevance in the future. The interviews were conducted with key
stakeholders in a wide variety of roles across different industry sectors. In
addition, we outline the state of XAI research through a concise review of the
relevant literature. This allows us to provide an encompassing overview covering
both the interviewees' views and the current state of academic research. By
comparing our interview results with current research approaches, we reveal
several discrepancies. While a multitude of XAI approaches exists, most are
centered around the model evaluation phase and data scientists. Their versatile
capabilities for other stages are currently either insufficiently explored or
unpopular among practitioners. In line with existing work, our findings confirm
that more effort is needed to enable non-expert users to interpret and
understand opaque AI models with existing methods and frameworks.
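Permutation feature importance is a typical post-hoc XAI technique of the kind the abstract locates in the model evaluation phase: it scores a feature by how much a model's error grows when that feature's values are shuffled. The toy model and data below are purely illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical toy model: only the first of two features matters.
def model(x):
    return 2.0 * x[0] + 0.0 * x[1]

def mse(predict, X, y):
    """Mean squared error of `predict` on (X, y)."""
    return sum((predict(x) - yi) ** 2 for x, yi in zip(X, y)) / len(X)

def permutation_importance(predict, X, y, feature, n_repeats=10, seed=0):
    """Average increase in MSE when one feature column is shuffled."""
    rng = random.Random(seed)
    base = mse(predict, X, y)
    scores = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]  # fresh copy each repeat
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        scores.append(mse(predict, X_perm, y) - base)
    return sum(scores) / n_repeats

X = [[float(i), float(i % 3)] for i in range(20)]
y = [2.0 * x[0] for x in X]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# The informative feature gets the larger score; the ignored one scores 0.
```

Scores like these are readable by data scientists during model evaluation, which matches the paper's observation that most current XAI tooling targets exactly this phase and audience.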
Related papers
- XAI meets LLMs: A Survey of the Relation between Explainable AI and Large Language Models [33.04648289133944]
Key challenges in Large Language Model (LLM) research center on interpretability.
Driven by increasing interest from AI and business sectors, we highlight the need for transparency in LLMs.
Our paper advocates for a balanced approach that values interpretability equally with functional advancements.
arXiv Detail & Related papers (2024-07-21T19:23:45Z) - Explainable Artificial Intelligence and Multicollinearity : A Mini Review of Current Approaches [0.0]
Explainable Artificial Intelligence (XAI) methods help to understand the internal mechanism of machine learning models.
A list of informative features is one of the most common outputs of XAI methods.
Multicollinearity is one of the major issues that should be considered when XAI methods generate explanations.
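The multicollinearity issue this mini review raises can be illustrated with a minimal, hypothetical sketch: when two features are perfectly collinear, different weight vectors realize the same predictor, so a coefficient-based feature-importance list becomes ambiguous. All names and values below are illustrative assumptions:

```python
# Two collinear features: x1 duplicates x0 in every row.
X = [[float(i), float(i)] for i in range(10)]

def predict(w, x):
    """Linear model with weight vector w."""
    return w[0] * x[0] + w[1] * x[1]

w_a = (2.0, 0.0)  # puts all weight on feature 0
w_b = (1.0, 1.0)  # splits the weight evenly

# Identical predictions from different weights: any importance ranking
# derived from the coefficients alone is arbitrary here.
preds_a = [predict(w_a, x) for x in X]
preds_b = [predict(w_b, x) for x in X]
```

Both weight vectors compute the same function on this data, yet one "explanation" credits feature 0 alone while the other credits both features equally, which is exactly why correlated features need special handling before trusting a feature-importance list.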
arXiv Detail & Related papers (2024-06-17T13:26:53Z) - A Systematic Literature Review on Explainability for Machine/Deep
Learning-based Software Engineering Research [23.966640472958105]
This paper presents a systematic literature review of approaches that aim to improve the explainability of AI models within the context of Software Engineering.
We aim to (1) summarize the SE tasks where XAI techniques have shown success to date, (2) classify and analyze different XAI techniques, and (3) investigate existing evaluation approaches.
arXiv Detail & Related papers (2024-01-26T03:20:40Z) - A Survey on Large Language Model based Autonomous Agents [105.2509166861984]
Large language models (LLMs) have demonstrated remarkable potential in achieving human-level intelligence.
This paper delivers a systematic review of the field of LLM-based autonomous agents from a holistic perspective.
We present a comprehensive overview of the diverse applications of LLM-based autonomous agents in the fields of social science, natural science, and engineering.
arXiv Detail & Related papers (2023-08-22T13:30:37Z) - Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a model; this is called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
arXiv Detail & Related papers (2023-06-06T10:18:36Z) - Impact Of Explainable AI On Cognitive Load: Insights From An Empirical
Study [0.0]
This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
arXiv Detail & Related papers (2023-04-18T09:52:09Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Human-Robot Collaboration and Machine Learning: A Systematic Review of
Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that studies interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z) - A Survey of Knowledge Tracing: Models, Variants, and Applications [70.69281873057619]
Knowledge Tracing is one of the fundamental tasks for student behavioral data analysis.
We present three types of fundamental KT models with distinct technical routes.
We discuss potential directions for future research in this rapidly growing field.
arXiv Detail & Related papers (2021-05-06T13:05:55Z) - Principles and Practice of Explainable Machine Learning [12.47276164048813]
This report focuses on data-driven methods -- machine learning (ML) and pattern recognition models in particular.
As these methods grow in prevalence and complexity, business stakeholders, at the very least, have growing concerns about the drawbacks of such models.
We have undertaken a survey to help industry practitioners understand the field of explainable machine learning better.
arXiv Detail & Related papers (2020-09-18T14:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.