Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room
- URL: http://arxiv.org/abs/2504.16148v1
- Date: Tue, 22 Apr 2025 13:20:43 GMT
- Title: Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room
- Authors: Danial Hooshyar, Gustav Šír, Yeongwook Yang, Eve Kikas, Raija Hämäläinen, Tommi Kärkkäinen, Dragan Gašević, Roger Azevedo
- Abstract summary: This analysis identifies and examines nine persistent challenges that continue to undermine the fairness, transparency, and effectiveness of current AI methods in education. These include the lack of clarity around what AI for education truly means and the trend of equating it with domain-agnostic, company-driven large language models. We demonstrate how hybrid AI methods -- specifically neural-symbolic AI -- can address the elephant in the room and serve as the foundation for responsible, trustworthy AI systems in education.
- Score: 2.674109837115132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant advancements in AI-driven educational systems and ongoing calls for responsible AI for education, several critical issues remain unresolved -- acting as the elephant in the room within AI in education, learning analytics, educational data mining, learning sciences, and educational psychology communities. This critical analysis identifies and examines nine persistent challenges that continue to undermine the fairness, transparency, and effectiveness of current AI methods and applications in education. These include: (1) the lack of clarity around what AI for education truly means -- often ignoring the distinct purposes, strengths, and limitations of different AI families -- and the trend of equating it with domain-agnostic, company-driven large language models; (2) the widespread neglect of essential learning processes such as motivation, emotion, and (meta)cognition in AI-driven learner modelling and their contextual nature; (3) limited integration of domain knowledge and lack of stakeholder involvement in AI design and development; (4) continued use of non-sequential machine learning models on temporal educational data; (5) misuse of non-sequential metrics to evaluate sequential models; (6) use of unreliable explainable AI methods to provide explanations for black-box models; (7) ignoring ethical guidelines in addressing data inconsistencies during model training; (8) use of mainstream AI methods for pattern discovery and learning analytics without systematic benchmarking; and (9) overemphasis on global prescriptions while overlooking localised, student-specific recommendations. Supported by theoretical and empirical research, we demonstrate how hybrid AI methods -- specifically neural-symbolic AI -- can address the elephant in the room and serve as the foundation for responsible, trustworthy AI systems in education.
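As a loose illustration of the pitfall behind challenges (4) and (5) -- treating temporal interaction data as exchangeable rows -- the following minimal Python sketch contrasts a random interaction-level split with a time-aware per-student holdout when evaluating next-response prediction. The data is synthetic and the two-feature logistic model is a hypothetical stand-in, not a method from the paper.

```python
# Minimal sketch with synthetic data; the feature set and model are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic interaction log: each student answers 20 items in temporal order,
# with a slowly improving success probability (a crude stand-in for learning).
n_students, seq_len = 200, 20
ability = rng.normal(0, 1, size=(n_students, 1))
trend = np.linspace(-0.5, 0.5, seq_len)                  # practice effect over time
p_correct = 1 / (1 + np.exp(-(ability + trend)))
correct = rng.binomial(1, p_correct)                     # shape (n_students, seq_len)

# Feature per interaction: the student's running success rate on *past* steps only.
past_count = np.cumsum(correct, axis=1) - correct
past_rate = past_count / np.maximum(np.arange(seq_len), 1)
step = np.tile(np.arange(seq_len), (n_students, 1))

X = np.stack([past_rate.ravel(), step.ravel()], axis=1)
y = correct.ravel()
is_last = (step == seq_len - 1).ravel()                  # temporal holdout: final step per student

# (a) Random interaction-level split: a student's later interactions can end up
#     in training while earlier ones are evaluated, mixing past and future.
idx = rng.permutation(len(y))
cut = int(0.8 * len(y))
tr, te = idx[:cut], idx[cut:]
auc_random = roc_auc_score(
    y[te], LogisticRegression().fit(X[tr], y[tr]).predict_proba(X[te])[:, 1]
)

# (b) Time-aware split: train on earlier steps, test on each student's last step.
auc_temporal = roc_auc_score(
    y[is_last],
    LogisticRegression().fit(X[~is_last], y[~is_last]).predict_proba(X[is_last])[:, 1],
)

print(f"random-split AUC:   {auc_random:.3f}")
print(f"temporal-split AUC: {auc_temporal:.3f}")
```

The two numbers answer different questions: only the time-aware protocol estimates how well the model predicts a student's unseen future interactions, which is the setting in which sequential learner models are actually deployed.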
Related papers
- Synergizing Self-Regulation and Artificial-Intelligence Literacy Towards Future Human-AI Integrative Learning [92.34299949916134]
Self-regulated learning (SRL) and Artificial-Intelligence (AI) literacy are becoming key competencies for successful human-AI interactive learning. This study analyzed data from 1,704 Chinese undergraduates using clustering methods to uncover four learner groups.
arXiv Detail & Related papers (2025-03-31T13:41:21Z) - World Models in Artificial Intelligence: Sensing, Learning, and Reasoning Like a Child [10.183372891207966]
World Models help Artificial Intelligence (AI) predict outcomes, reason about its environment, and guide decision-making. We highlight six key research areas -- physics-informed learning, neurosymbolic learning, continual learning, causal inference, human-in-the-loop AI, and responsible AI -- as essential for enabling true reasoning in AI.
arXiv Detail & Related papers (2025-03-19T12:50:40Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Human-Centric eXplainable AI in Education [0.0]
This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape.
It emphasizes its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools.
It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement.
arXiv Detail & Related papers (2024-10-18T14:02:47Z) - The Interplay of Learning, Analytics, and Artificial Intelligence in Education: A Vision for Hybrid Intelligence [0.45207442500313766]
I challenge the prevalent narrow conceptualisation of AI as tools, and argue for the importance of alternative conceptualisations of AI.
I highlight the differences between human intelligence and artificial information processing, and posit that AI can also serve as an instrument for understanding human learning.
The paper presents three unique conceptualisations of AI: the externalization of human cognition, the internalization of AI models to influence human mental models, and the extension of human cognition via tightly coupled human-AI hybrid intelligence systems.
arXiv Detail & Related papers (2024-03-24T10:07:46Z) - From Algorithm Worship to the Art of Human Learning: Insights from 50-year journey of AI in Education [0.0]
Current discourse surrounding Artificial Intelligence (AI) oscillates between hope and apprehension.
This paper delves into the complexities of AI's role in Education, addressing the mixed messages that have both enthused and alarmed educators.
It explores the promises that AI holds for enhancing learning through personalisation at scale, against the backdrop of concerns about ethical implications.
arXiv Detail & Related papers (2024-02-05T16:12:14Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies [6.368014180870025]
The questionnaire should be modular (i.e., including different facets that can be used independently of each other) to be flexibly applicable in professional life.
We derived 60 items to represent different facets of AI literacy according to Ng and colleagues' conceptualisation of AI literacy.
An additional 12 items represent psychological competencies such as problem solving, learning, and emotion regulation with regard to AI.
arXiv Detail & Related papers (2023-02-18T12:35:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)