Funhouse Mirror or Echo Chamber? A Methodological Approach to Teaching Critical AI Literacy Through Metaphors
- URL: http://arxiv.org/abs/2411.14730v1
- Date: Fri, 22 Nov 2024 05:01:11 GMT
- Title: Funhouse Mirror or Echo Chamber? A Methodological Approach to Teaching Critical AI Literacy Through Metaphors
- Authors: Jasper Roe, Leon Furze, Mike Perkins
- Abstract summary: This study proposes a methodological approach combining Conceptual Metaphor Theory (CMT) with UNESCO's AI competency framework to develop Critical AI Literacy (CAIL).
We identify and suggest four key metaphors for teaching CAIL: GenAI as an echo chamber, GenAI as a funhouse mirror, GenAI as a black box magician, and GenAI as a map.
- Abstract: As educational institutions grapple with teaching students about increasingly complex Artificial Intelligence (AI) systems, finding effective methods for explaining these technologies and their societal implications remains a major challenge. This study proposes a methodological approach combining Conceptual Metaphor Theory (CMT) with UNESCO's AI competency framework to develop Critical AI Literacy (CAIL). Through a systematic analysis of metaphors commonly used to describe AI systems, we develop criteria for selecting pedagogically appropriate metaphors and demonstrate their alignment with established AI literacy competencies as well as UNESCO's AI competency framework. Our method identifies and suggests four key metaphors for teaching CAIL: GenAI as an echo chamber, GenAI as a funhouse mirror, GenAI as a black box magician, and GenAI as a map. Each of these addresses specific characteristics of AI systems, from filter bubbles to algorithmic opacity. We present these metaphors alongside interactive activities designed to engage students in experiential learning of AI concepts. In doing so, we offer educators a structured approach to teaching CAIL that bridges technical understanding with societal implications. This work contributes to the growing field of AI education by demonstrating how carefully selected metaphors can make complex technological concepts more accessible while promoting critical engagement with AI systems.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Human-Centric eXplainable AI in Education [0.0]
This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape.
It emphasizes its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools.
It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement.
arXiv Detail & Related papers (2024-10-18T14:02:47Z)
- AI Literacy for All: Adjustable Interdisciplinary Socio-technical Curriculum [0.8879149917735942]
This paper presents a curriculum, "AI Literacy for All," to promote an interdisciplinary understanding of AI.
The paper presents four pillars of AI literacy: understanding the scope and technical dimensions of AI, learning how to interact with Gen-AI in an informed and responsible way, the socio-technical issues of ethical and responsible AI, and the social and future implications of AI.
arXiv Detail & Related papers (2024-09-02T13:13:53Z)
- Artificial Intelligence from Idea to Implementation. How Can AI Reshape the Education Landscape? [0.0]
The paper shows how AI technologies have moved from theoretical constructs to practical tools that are reshaping pedagogical approaches and student engagement.
The essay concludes by discussing the prospects of AI in education, emphasizing the need for a balanced approach that considers both technological advancements and societal implications.
arXiv Detail & Related papers (2024-07-14T04:40:16Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies [0.9012198585960443]
This study explores how discussing metaphors for AI can help build awareness of the frames that shape our understanding of AI systems.
We analyzed metaphors from a range of sources, and reflected on them individually according to seven questions.
We explored each metaphor along the dimension of whether it promotes anthropomorphizing, and to what extent such metaphors imply that AI is sentient.
arXiv Detail & Related papers (2024-01-15T15:15:48Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Building Human-like Communicative Intelligence: A Grounded Perspective [1.0152838128195465]
After making astounding progress in language learning, AI systems seem to be approaching a ceiling that does not reflect important aspects of human communicative capacities.
This paper suggests that the dominant cognitively-inspired AI directions, based on nativist and symbolic paradigms, lack necessary substantiation and concreteness to guide progress in modern AI.
I propose a list of concrete, implementable components for building "grounded" linguistic intelligence.
arXiv Detail & Related papers (2022-01-02T01:43:24Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)