Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI
- URL: http://arxiv.org/abs/2508.00665v1
- Date: Fri, 01 Aug 2025 14:36:16 GMT
- Title: Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI
- Authors: Maryam Mosleh, Marie Devlin, Ellis Solaiman
- Abstract summary: Most explainable AI (XAI) techniques focus on technical outputs but neglect user roles and comprehension. This paper proposes a hybrid framework that integrates traditional XAI techniques with generative AI models and user personalisation. Our aim is to move towards explainable AI that enhances transparency while supporting user-centred experiences.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence-driven adaptive learning systems are reshaping education through data-driven adaptation of learning experiences. Yet many of these systems lack transparency, offering limited insight into how decisions are made. Most explainable AI (XAI) techniques focus on technical outputs but neglect user roles and comprehension. This paper proposes a hybrid framework that integrates traditional XAI techniques with generative AI models and user personalisation to generate multimodal, personalised explanations tailored to user needs. We redefine explainability as a dynamic communication process tailored to user roles and learning goals. We outline the framework's design, key XAI limitations in education, and research directions on accuracy, fairness, and personalisation. Our aim is to move towards explainable AI that enhances transparency while supporting user-centred experiences.
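The abstract describes the framework only at a high level: a traditional XAI step produces a technical explanation, and a generative model then reshapes it into a multimodal, personalised explanation keyed to the user's role and learning goal. The sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' implementation; the choice of scikit-learn, permutation importance as the "traditional XAI" step, the feature names, and the placeholder generate_explanation function are all assumptions made for the example.

```python
# Hypothetical sketch of the hybrid pipeline described in the abstract:
# 1) a traditional XAI step produces feature attributions,
# 2) a generative model rewrites them for a specific user role and learning goal.
# Component choices (scikit-learn, permutation importance, feature names, prompt
# wording) are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def traditional_xai(model, X, y, feature_names, top_k=3):
    """Return the top-k features driving the learner model's decisions."""
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    ranked = np.argsort(result.importances_mean)[::-1][:top_k]
    return [(feature_names[i], float(result.importances_mean[i])) for i in ranked]

def build_prompt(attributions, user_role, learning_goal):
    """Turn raw attributions into a role- and goal-aware instruction for a generative model."""
    facts = "; ".join(f"{name} (importance {score:.2f})" for name, score in attributions)
    return (
        f"Explain to a {user_role} whose goal is '{learning_goal}' why the adaptive "
        f"learning system made its recommendation. Key factors: {facts}. "
        f"Use plain language and suggest one next step."
    )

def generate_explanation(prompt):
    # Placeholder for a call to whichever generative AI model (text, audio, or
    # visual output) the framework would use; not specified by the paper.
    return f"[generated explanation for prompt: {prompt[:60]}...]"

# Usage with toy learner-interaction data (purely illustrative).
X = np.random.rand(200, 3)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)
features = ["quiz_accuracy", "time_on_task", "hint_requests"]

attributions = traditional_xai(model, X, y, features)
prompt = build_prompt(attributions, user_role="student", learning_goal="pass the algebra unit")
print(generate_explanation(prompt))
```

In a full system, the same attributions would be re-rendered differently for a teacher or course designer, which is the personalisation the abstract emphasises.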
Related papers
- AI-Powered Math Tutoring: Platform for Personalized and Adaptive Education [0.0]
We introduce a novel multi-agent AI tutoring platform that combines adaptive and personalized feedback, structured course generation, and textbook knowledge retrieval. This system allows students to learn new topics while identifying and targeting their weaknesses, revise for exams effectively, and practice on an unlimited number of personalized exercises.
arXiv Detail & Related papers (2025-07-14T20:35:16Z) - Explanation User Interfaces: A Systematic Literature Review [4.8032298282965025]
Presenting explanations to the user is not trivial and is often left as a secondary aspect of the system's design process. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature. We also present a framework for Human-cEnteRed developMent of Explainable user interfaceS (HERMES) to guide practitioners and academics in the design and evaluation of XUIs.
arXiv Detail & Related papers (2025-05-26T15:00:17Z) - Explainable artificial intelligence (XAI): from inherent explainability to large language models [0.0]
Explainable AI (XAI) techniques facilitate the explainability or interpretability of machine learning models. This paper details the advancements of explainable AI methods, from inherently interpretable models to modern approaches. We review explainable AI techniques that leverage vision-language model (VLM) frameworks to automate or improve the explainability of other machine learning models.
arXiv Detail & Related papers (2025-01-17T06:16:57Z) - Human-Centric eXplainable AI in Education [0.0]
This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape.
It emphasizes its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools.
It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement.
arXiv Detail & Related papers (2024-10-18T14:02:47Z) - Towards Automated Knowledge Integration From Human-Interpretable Representations [55.2480439325792]
We introduce and theoretically motivate the principles of informed meta-learning, enabling automated and controllable inductive bias selection. We empirically demonstrate the potential benefits and limitations of informed meta-learning in improving data efficiency and generalisation.
arXiv Detail & Related papers (2024-02-25T15:08:37Z) - XAI for All: Can Large Language Models Simplify Explainable AI? [0.0699049312989311]
"x-[plAIn]" is a new approach to make XAI more accessible to a wider audience through a custom Large Language Model.
Our goal was to design a model that can generate clear, concise summaries of various XAI methods.
Results from our use-case studies show that our model is effective in providing easy-to-understand, audience-specific explanations.
arXiv Detail & Related papers (2024-01-23T21:47:12Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for mechanic design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, namely supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, including anchoring to the model's judgments and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)