Transferring Domain Knowledge with (X)AI-Based Learning Systems
- URL: http://arxiv.org/abs/2406.01329v1
- Date: Mon, 3 Jun 2024 13:56:30 GMT
- Title: Transferring Domain Knowledge with (X)AI-Based Learning Systems
- Authors: Philipp Spitzer, Niklas Kühl, Marc Goutier, Manuel Kaschura, Gerhard Satzger
- Abstract summary: Explainable artificial intelligence (XAI) has conventionally been used to make black-box artificial intelligence systems interpretable.
An (X)AI system is trained on experts' past decisions and is then employed to teach novices by providing examples and explanations.
We show that (X)AI-based learning systems are able to induce learning in novices and that their cognitive styles moderate learning.
- Score: 3.0059120458540383
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In numerous high-stakes domains, training novices via conventional learning systems does not suffice. To impart tacit knowledge, experts' hands-on guidance is imperative. However, training novices by experts is costly and time-consuming, increasing the need for alternatives. Explainable artificial intelligence (XAI) has conventionally been used to make black-box artificial intelligence systems interpretable. In this work, we utilize XAI as an alternative: An (X)AI system is trained on experts' past decisions and is then employed to teach novices by providing examples coupled with explanations. In a study with 249 participants, we measure the effectiveness of such an approach for a classification task. We show that (X)AI-based learning systems are able to induce learning in novices and that their cognitive styles moderate learning. Thus, we take the first steps to reveal the impact of XAI on human learning and point AI developers to future options to tailor the design of (X)AI-based learning systems.
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate the use of effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z) - Development of an Adaptive Multi-Domain Artificial Intelligence System Built using Machine Learning and Expert Systems Technologies [0.0]
An artificial general intelligence (AGI) has been an elusive goal in artificial intelligence (AI) research for some time.
An AGI would have the capability, like a human, to be exposed to a new problem domain, learn about it and then use reasoning processes to make decisions.
This paper presents a small step towards producing an AGI.
arXiv Detail & Related papers (2024-06-17T07:21:44Z) - Towards a general framework for improving the performance of classifiers using XAI methods [0.0]
This paper proposes a framework for automatically improving the performance of pre-trained Deep Learning (DL) classifiers using XAI methods.
We present two such architectures, which we will call auto-encoder-based and encoder-decoder-based, and discuss their key aspects.
arXiv Detail & Related papers (2024-03-15T15:04:20Z) - Toward enriched Cognitive Learning with XAI [44.99833362998488]
We introduce an intelligent system (CL-XAI) for Cognitive Learning which is supported by artificial intelligence (AI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle problems to enhance problem-solving skills.
arXiv Detail & Related papers (2023-12-19T16:13:47Z) - How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.