Human-Centric eXplainable AI in Education
- URL: http://arxiv.org/abs/2410.19822v1
- Date: Fri, 18 Oct 2024 14:02:47 GMT
- Title: Human-Centric eXplainable AI in Education
- Authors: Subhankar Maity, Aniket Deroy
- Abstract summary: This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape.
It emphasizes its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools.
It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement.
- Abstract: As artificial intelligence (AI) becomes more integrated into educational environments, how can we ensure that these systems are both understandable and trustworthy? The growing demand for explainability in AI systems is a critical area of focus. This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape, emphasizing its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools, particularly through the innovative use of large language models (LLMs). What challenges arise in the implementation of explainable AI in educational contexts? This paper analyzes these challenges, addressing the complexities of AI models and the diverse needs of users. It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement, ensuring that educators and students can effectively interact with these technologies. Furthermore, what steps can educators, developers, and policymakers take to create more effective, inclusive, and ethically responsible AI solutions in education? The paper provides targeted recommendations to address this question, highlighting the necessity of prioritizing explainability. By doing so, how can we leverage AI's transformative potential to foster equitable and engaging educational experiences that support diverse learners?
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that such shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Generative AI and Its Impact on Personalized Intelligent Tutoring Systems [0.0]
Generative AI enables personalized education through dynamic content generation, real-time feedback, and adaptive learning pathways.
The report explores key applications such as automated question generation, customized feedback mechanisms, and interactive dialogue systems.
Future directions highlight the potential advancements in multimodal AI integration, emotional intelligence in tutoring systems, and the ethical implications of AI-driven education.
arXiv Detail & Related papers (2024-10-14T16:01:01Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Transdisciplinary AI Education: The Confluence of Curricular and Community Needs in the Instruction of Artificial Intelligence [0.7133676002283578]
We examine the current state of AI in education and explore the potential benefits and challenges of incorporating this technology into the classroom.
This paper delves into the AI program currently in development for Neom Community School and the larger Education, Research, and Innovation Sector in Neom, Saudi Arabia's new megacity under development.
arXiv Detail & Related papers (2023-11-10T17:26:27Z)
- Towards social generative AI for education: theory, practices and ethics [0.0]
Building social generative AI for education will require development of powerful AI systems that can converse with each other as well as humans.
We need to consider how to design and constrain social generative AI for education.
arXiv Detail & Related papers (2023-06-14T17:30:48Z)
- Assigning AI: Seven Approaches for Students, with Prompts [0.0]
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools.
The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student.
arXiv Detail & Related papers (2023-06-13T03:36:36Z)
- What Students Can Learn About Artificial Intelligence -- Recommendations for K-12 Computing Education [0.0]
Technological advances in the context of digital transformation are the basis for rapid developments in the field of artificial intelligence (AI).
An increasing number of computer science curricula are being extended to include the topic of AI.
This paper presents a curriculum of learning objectives that addresses digital literacy and the societal perspective in particular.
arXiv Detail & Related papers (2023-05-10T20:39:43Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)