Toward enriched Cognitive Learning with XAI
- URL: http://arxiv.org/abs/2312.12290v1
- Date: Tue, 19 Dec 2023 16:13:47 GMT
- Title: Toward enriched Cognitive Learning with XAI
- Authors: Muhammad Suffian, Ulrike Kuhl, Jose M. Alonso-Moral, Alessandro Bogliolo
- Abstract summary: We introduce an intelligent system (CL-XAI) for Cognitive Learning, supported by explainable AI (XAI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle combinatorial problems to enhance their problem-solving skills.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As computational systems supported by artificial intelligence (AI) techniques
continue to play an increasingly pivotal role in making high-stakes
recommendations and decisions across various domains, the demand for
explainable AI (XAI) has grown significantly, extending its impact into
cognitive learning research. Providing explanations for novel concepts is
recognised as a fundamental aid in the learning process, particularly when
addressing challenges stemming from knowledge deficiencies and skill
application. Addressing these difficulties involves timely explanations and
guidance throughout the learning process, prompting the interest of AI experts
in developing explainer models. In this paper, we introduce an intelligent
system (CL-XAI) for Cognitive Learning, which is supported by XAI, focusing on
two key research objectives: exploring how human learners comprehend the
internal mechanisms of AI models using XAI tools and evaluating the
effectiveness of such tools through human feedback. The use of CL-XAI is
illustrated with a game-inspired virtual use case where learners tackle
combinatorial problems to enhance problem-solving skills and deepen their
understanding of complex concepts, highlighting the potential for
transformative advances in cognitive learning and co-learning.
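As an illustration of the explanation-plus-feedback loop the abstract describes, here is a minimal, hypothetical Python sketch; CL-XAI's actual implementation is not shown in the paper, and every name below is invented for illustration.
```python
# Minimal, hypothetical sketch of an XAI-supported learning session:
# present a model decision with a justification, collect human feedback,
# and aggregate it to evaluate the explainer (the paper's second objective).
from dataclasses import dataclass, field

@dataclass
class ExplanationFeedback:
    item_id: str        # which problem/explanation the rating refers to
    helpful: bool       # the learner's judgement of the explanation
    comment: str = ""

@dataclass
class LearningSession:
    feedback: list = field(default_factory=list)

    def present(self, item_id: str, prediction: str, explanation: str) -> None:
        # Show the model's output together with an XAI-style justification.
        print(f"[{item_id}] model suggests: {prediction} (because {explanation})")

    def record(self, item_id: str, helpful: bool, comment: str = "") -> None:
        self.feedback.append(ExplanationFeedback(item_id, helpful, comment))

    def helpfulness_rate(self) -> float:
        # Aggregate human feedback into a simple effectiveness measure.
        return sum(f.helpful for f in self.feedback) / max(len(self.feedback), 1)

session = LearningSession()
session.present("puzzle-1", "place the queen on c4", "it blocks both diagonals")
session.record("puzzle-1", helpful=True)
print(f"helpfulness so far: {session.helpfulness_rate():.0%}")
```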
Related papers
- Integrating Cognitive AI with Generative Models for Enhanced Question Answering in Skill-based Learning [3.187381965457262]
This paper proposes a novel approach that merges Cognitive AI and Generative AI to address these challenges.
We employ a structured knowledge representation, the TMK (Task-Method-Knowledge) model, to encode skills taught in an online Knowledge-based AI course (a toy sketch of such a structure follows below).
arXiv Detail & Related papers (2024-07-28T04:21:22Z)
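A TMK encoding lends itself to a simple structured sketch. The layout below is an assumption for illustration only; the paper's actual schema is not reproduced here.
```python
# Toy sketch of a Task-Method-Knowledge (TMK) style encoding of a skill.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    goal: str                                   # what the skill achieves

@dataclass
class Method:
    name: str
    achieves: str                               # name of the Task it realizes
    steps: list = field(default_factory=list)   # how it is carried out

@dataclass
class Knowledge:
    concept: str
    definition: str                             # domain concept the method uses

skill = {
    "task": Task("classify-frame", "label a knowledge frame as valid or not"),
    "method": Method("slot-check", "classify-frame",
                     ["inspect each slot", "flag missing fillers", "return label"]),
    "knowledge": [Knowledge("frame", "a structured set of slot-filler pairs")],
}
print(skill["method"].steps)
```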
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction (a toy robustness probe is sketched below).
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
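The robustness claim above can be made concrete with a simple probe. This is not the paper's AE method: a genuine adversarial explanation searches for perturbations deliberately, whereas this sketch merely samples them against a stand-in policy.
```python
# Toy robustness probe: randomly perturb an observation and count how
# often a (stand-in) linear policy's chosen action flips.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))            # stand-in policy: 4-dim obs -> 3 actions

def action(obs: np.ndarray) -> int:
    return int(np.argmax(W @ obs))

obs = rng.normal(size=4)
base = action(obs)
flips = sum(action(obs + rng.normal(scale=0.05, size=4)) != base
            for _ in range(1000))
print(f"action flipped in {flips / 1000:.1%} of small random perturbations")
```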
- Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make the models more transparent.
We identify four significant challenges that XAI faces within Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z)
- Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques (a toy evolutionary counterfactual search is sketched below).
arXiv Detail & Related papers (2024-06-12T02:06:24Z)
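One way EC can serve XAI is counterfactual search. The following toy illustration (not taken from the paper) evolves an input that flips a simple classifier's decision while staying close to the original point.
```python
# Evolutionary counterfactual search against a stand-in linear classifier.
import numpy as np

rng = np.random.default_rng(1)
w, b = np.array([1.5, -2.0]), 0.3                 # stand-in classifier weights

def predict(x: np.ndarray) -> int:
    return int(x @ w + b > 0)

x0 = np.array([0.6, 0.1])                         # original input; class 1
pop = x0 + rng.normal(scale=0.5, size=(50, 2))    # initial population

for _ in range(100):
    # fitness: reward flipping the class, penalise distance from x0
    flipped = np.array([predict(x) == 0 for x in pop], dtype=float)
    dist = np.linalg.norm(pop - x0, axis=1)
    keep = pop[np.argsort(flipped - dist)[-10:]]              # 10 fittest
    kids = keep[rng.integers(0, 10, size=40)]                 # clone...
    pop = np.vstack([keep, kids + rng.normal(scale=0.1, size=(40, 2))])  # ...and mutate

best = min((x for x in pop if predict(x) == 0),
           key=lambda x: np.linalg.norm(x - x0))
print("counterfactual:", best, "distance:", np.linalg.norm(best - x0))
```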
- Transferring Domain Knowledge with (X)AI-Based Learning Systems [3.0059120458540383]
Explainable artificial intelligence (XAI) has conventionally been used to make black-box artificial intelligence systems interpretable.
An (X)AI system is trained on experts' past decisions and is then employed to teach novices by providing examples and explanations.
We show that (X)AI-based learning systems are able to induce learning in novices and that their cognitive styles moderate learning (an example-based teaching step is sketched below).
arXiv Detail & Related papers (2024-06-03T13:56:30Z)
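The example-plus-explanation teaching pattern above can be sketched very simply. The paper's actual system is not public; the retrieval rule and all names here are assumptions.
```python
# Hypothetical example-based teaching: answer a novice's query with the
# most similar past expert decision as a worked example.
import numpy as np

# toy expert history: (features, decision) pairs; features are made up
expert_cases = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.4]])
expert_decisions = ["approve", "reject", "approve"]

def teach(novice_case: np.ndarray) -> str:
    i = int(np.argmin(np.linalg.norm(expert_cases - novice_case, axis=1)))
    return (f"Experts decided '{expert_decisions[i]}' on the most similar "
            f"past case {expert_cases[i]}; compare it with yours {novice_case}.")

print(teach(np.array([0.8, 0.3])))
```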
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework for generating and evaluating explanations grounded in different cognitive levels of understanding (a toy level-dependent rendering is sketched below).
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
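To make the cognitive-level idea tangible, here is a hypothetical rendering of one counterfactual at different levels of user understanding; the level names and wording are illustrative, not the paper's actual framework.
```python
# Render the same counterfactual for users at different cognitive levels.
def render_counterfactual(feature: str, old: float, new: float, level: str) -> str:
    if level == "novice":
        return (f"If your {feature} had been {new} instead of {old}, "
                f"the decision would have changed.")
    if level == "intermediate":
        return (f"The decision is sensitive to {feature}: moving it from "
                f"{old} to {new} crosses the model's decision boundary.")
    # expert: expose the raw delta for users who reason about the model itself
    return f"delta({feature}) = {new - old:+.2f} flips the predicted class."

for level in ("novice", "intermediate", "expert"):
    print(f"{level:>12}: {render_counterfactual('income', 30000, 42000, level)}")
```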
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks (a minimal flavour of the idea is sketched below).
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
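A minimal flavour of the neuro-symbolic idea, applied at inference time for brevity (the paper targets guidance during learning): a symbolic knowledge base masks out labels a stand-in neural network should not emit in a given context. All names here are invented.
```python
# Knowledge-base constraint over a (stand-in) neural network's scores.
import numpy as np

labels = ["cat", "dog", "car"]
neural_scores = np.array([0.1, 0.3, 0.6])        # stand-in network output

kb = {"indoors": {"car"}}                        # context -> forbidden labels

def neuro_symbolic_predict(scores: np.ndarray, context: str) -> str:
    allowed = np.array([lab not in kb.get(context, set()) for lab in labels])
    return labels[int(np.argmax(np.where(allowed, scores, -np.inf)))]

print(neuro_symbolic_predict(neural_scores, "indoors"))   # -> dog
print(neuro_symbolic_predict(neural_scores, "street"))    # -> car
```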
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.