"Alexa doesn't have that many feelings": Children's understanding of AI
through interactions with smart speakers in their homes
- URL: http://arxiv.org/abs/2305.05597v1
- Date: Tue, 9 May 2023 16:39:34 GMT
- Authors: Valentina Andries and Judy Robertson
- Abstract summary: Children's understanding of AI-supported technology has educational implications.
Findings will enable educators to develop appropriate materials to address the pressing need for AI literacy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As voice-based Conversational Assistants (CAs), including Alexa, Siri, and Google
Home, have become commonly embedded in households, many children now routinely
interact with Artificial Intelligence (AI) systems. It is important to research
children's experiences with consumer devices which use AI techniques because
these shape their understanding of AI and its capabilities. We conducted a
mixed-methods study (questionnaires and interviews) with primary-school
children aged 6-11 in Scotland to establish children's understanding of how
voice-based CAs work; how they perceive the CAs' cognitive abilities, agency, and
other human-like qualities; their awareness of and trust in privacy aspects when
using CAs; and what they perceive as appropriate verbal interactions with CAs.
Most children overestimated the CAs' intelligence and were uncertain about the
systems' feelings or agency. They also lacked accurate understanding of data
privacy and security aspects, and believed it was wrong to be rude to
conversational assistants. Exploring children's current understanding of
AI-supported technology has educational implications; such findings will enable
educators to develop appropriate materials to address the pressing need for AI
literacy.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition (arXiv, 2024-11-04)
  We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
  While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
  We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
- Human-Centric eXplainable AI in Education (arXiv, 2024-10-18)
  This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape.
  It emphasizes its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools.
  It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement.
- Distributed agency in second language learning and teaching through generative AI (arXiv, 2024-03-29)
  ChatGPT can provide informal second language practice through chats in written or voice forms.
  Instructors can use AI to build learning and assessment materials in a variety of media.
- Generative AI in Education: A Study of Educators' Awareness, Sentiments, and Influencing Factors (arXiv, 2024-03-22)
  This study delves into university instructors' experiences and attitudes toward AI language models.
  We find no correlation between teaching style and attitude toward generative AI.
  While CS educators show far more confidence in their technical understanding of generative AI tools, they show no more confidence in their ability to detect AI-generated work.
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube (arXiv, 2024-01-30)
  Prominent ethical issues in high school AI education include data privacy, information leakage, abusive language, and fairness.
  This paper describes technological components that were built to address ethical and trustworthiness concerns in a multi-modal collaborative platform.
  For data privacy, we want to ensure that the informed consent of children, parents, and teachers is at the center of any data that is managed.
- Exploring Parents' Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities (arXiv, 2024-01-24)
  AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives.
  This paper investigates how they function in practical storytelling and reading scenarios and how parents, the most critical stakeholders, experience and perceive them.
  Our findings suggest that even though AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still cannot meet parents' expectations due to a series of interactive and algorithmic challenges.
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems (arXiv, 2023-11-13)
  We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
  We illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
- Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs (arXiv, 2023-08-08)
  The mainstream AI approaches are generative deep learning with large language models (LLMs) and manually constructed symbolic approaches.
  This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) (arXiv, 2022-01-26)
  Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
  It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
- Trustworthy AI: A Computational Perspective (arXiv, 2021-07-12)
  We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
  For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles (arXiv, 2020-08-07)
  We describe various aspects of multiple human intelligences and learning styles, which may impact a variety of AI problem domains.
  Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.