A Human-Centric Approach to Explainable AI for Personalized Education
- URL: http://arxiv.org/abs/2505.22541v1
- Date: Wed, 28 May 2025 16:23:48 GMT
- Title: A Human-Centric Approach to Explainable AI for Personalized Education
- Authors: Vinitra Swamy
- Abstract summary: This thesis aims to bring human needs to the forefront of eXplainable AI (XAI) research. We propose four novel technical contributions in interpretability with a multimodal modular architecture. Our work lays a foundation for human-centric AI systems that balance state-of-the-art performance with built-in transparency and trust.
- Score: 1.0878040851638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks form the backbone of artificial intelligence research, with potential to transform the human experience in areas ranging from autonomous driving to personal assistants, healthcare to education. However, their integration into the daily routines of real-world classrooms remains limited. It is not yet common for a teacher to assign students individualized homework targeting their specific weaknesses, provide students with instant feedback, or simulate student responses to a new exam question. While these models excel in predictive performance, this lack of adoption can be attributed to a significant weakness: the lack of explainability of model decisions, leading to a lack of trust from students, parents, and teachers. This thesis aims to bring human needs to the forefront of eXplainable AI (XAI) research, grounded in the concrete use case of personalized learning and teaching. We frame the contributions along two verticals: technical advances in XAI and their aligned human studies. We investigate explainability in AI for education, revealing systematic disagreements between post-hoc explainers and identifying a need for inherently interpretable model architectures. We propose four novel technical contributions in interpretability with a multimodal modular architecture (MultiModN), an interpretable mixture-of-experts model (InterpretCC), adversarial training for explainer stability, and a theory-driven LLM-XAI framework to present explanations to students (iLLuMinaTE), which we evaluate in diverse settings with professors, teachers, learning scientists, and university students. By combining empirical evaluations of existing explainers with novel architectural designs and human studies, our work lays a foundation for human-centric AI systems that balance state-of-the-art performance with built-in transparency and trust.
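The abstract's claim of systematic disagreement between post-hoc explainers can be made concrete by comparing the feature attributions that two explainers assign to the same prediction. The sketch below is not code from the thesis; it is a minimal, hypothetical illustration that compares two attribution vectors (for example, one from a LIME-style and one from a SHAP-style explainer) using Spearman rank correlation and top-k feature overlap, two agreement measures commonly used in this line of work. The feature count and the attribution values are placeholders.

```python
# Minimal sketch: quantifying disagreement between two post-hoc explainers.
# The attribution vectors are illustrative placeholders; in practice they would
# come from explainers such as LIME and SHAP applied to the same
# student-success prediction.
import numpy as np
from scipy.stats import spearmanr


def top_k_overlap(a: np.ndarray, b: np.ndarray, k: int = 5) -> float:
    """Fraction of features shared among the top-k most important in each explanation."""
    top_a = set(np.argsort(-np.abs(a))[:k])
    top_b = set(np.argsort(-np.abs(b))[:k])
    return len(top_a & top_b) / k


def rank_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation between the magnitudes of two attribution vectors."""
    rho, _ = spearmanr(np.abs(a), np.abs(b))
    return float(rho)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical attributions over 20 learner features (clicks, quiz scores, ...).
    lime_attr = rng.normal(size=20)
    shap_attr = 0.4 * lime_attr + rng.normal(size=20)  # partially agreeing explainer

    print(f"top-5 overlap:    {top_k_overlap(lime_attr, shap_attr):.2f}")
    print(f"rank correlation: {rank_agreement(lime_attr, shap_attr):.2f}")
```

Consistently low overlap and low rank correlation across many students would indicate the kind of systematic disagreement the thesis reports, motivating inherently interpretable architectures such as MultiModN and InterpretCC rather than reliance on post-hoc explanation alone.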
Related papers
- Beyond Automation: Socratic AI, Epistemic Agency, and the Implications of the Emergence of Orchestrated Multi-Agent Learning Architectures [0.0]
Generative AI is no longer a peripheral tool in higher education. This paper presents findings from a controlled experiment evaluating a Socratic AI Tutor. Students using the Tutor reported significantly greater support for critical, independent, and reflective thinking.
arXiv Detail & Related papers (2025-08-07T07:49:03Z)
- AI-Powered Math Tutoring: Platform for Personalized and Adaptive Education [0.0]
We introduce a novel multi-agent AI tutoring platform that combines adaptive and personalized feedback, structured course generation, and textbook knowledge retrieval. This system allows students to learn new topics while identifying and targeting their weaknesses, revise for exams effectively, and practice on an unlimited number of personalized exercises.
arXiv Detail & Related papers (2025-07-14T20:35:16Z)
- A Computational Model of Inclusive Pedagogy: From Understanding to Application [1.2058600649065616]
Human education transcends mere knowledge transfer; it relies on co-adaptation dynamics. Despite its centrality, computational models of co-adaptive teacher-student interactions (T-SI) remain underdeveloped. We present a computational T-SI model that integrates contextual insights on human education into a testable framework.
arXiv Detail & Related papers (2025-05-02T12:26:31Z)
- Resurrecting Socrates in the Age of AI: A Study Protocol for Evaluating a Socratic Tutor to Support Research Question Development in Higher Education [0.0]
This protocol lays out a study grounded in constructivist learning theory to evaluate a novel AI-based Socratic Tutor. The tutor engages students through iterative, reflective questioning, aiming to promote System 2 thinking. This study aims to advance the understanding of how generative AI can be pedagogically aligned to support, not replace, human cognition.
arXiv Detail & Related papers (2025-04-05T00:49:20Z)
- Form-Substance Discrimination: Concept, Cognition, and Pedagogy [55.2480439325792]
This paper examines form-substance discrimination as an essential learning outcome for curriculum development in higher education. We propose practical strategies for fostering this ability through curriculum design, assessment practices, and explicit instruction.
arXiv Detail & Related papers (2025-04-01T04:15:56Z)
- The Imitation Game for Educational AI [23.71250100390303]
We present a novel evaluation framework based on a two-phase Turing-like test. In Phase 1, students provide open-ended responses to questions, revealing natural misconceptions. In Phase 2, both AI and human experts, conditioned on each student's specific mistakes, generate distractors for new related questions.
arXiv Detail & Related papers (2025-02-21T01:14:55Z)
- Education in the Era of Neurosymbolic AI [0.6468510459310326]
We propose a system that leverages the unique affordances of pedagogical agents as critical components of a hybrid NAI architecture.
We conclude that education in the era of NAI will make learning more accessible, equitable, and aligned with real-world skills.
arXiv Detail & Related papers (2024-11-16T19:18:39Z)
- When LLMs Learn to be Students: The SOEI Framework for Modeling and Evaluating Virtual Student Agents in Educational Interaction [12.070907646464537]
We propose the SOEI framework for constructing and evaluating personality-aligned Virtual Student Agents (LVSAs) in classroom scenarios. We generate five LVSAs based on Big Five traits through LoRA fine-tuning and expert-informed prompt design. Our results provide: (1) an educationally and psychologically grounded generation pipeline for LLM-based student agents; (2) a hybrid, scalable evaluation framework for behavioral realism; and (3) empirical insights into the pedagogical utility of LVSAs in shaping instructional adaptation.
arXiv Detail & Related papers (2024-10-21T07:18:24Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL) by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting. Our study shows the benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as anchoring on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.