Examining the Values Reflected by Children during AI Problem Formulation
- URL: http://arxiv.org/abs/2309.15839v1
- Date: Wed, 27 Sep 2023 17:58:30 GMT
- Title: Examining the Values Reflected by Children during AI Problem Formulation
- Authors: Utkarsh Dwivedi, Salma Elsayed-Ali, Elizabeth Bonsignore and Hernisa
Kacorri
- Abstract summary: We find that children's proposed ideas require advanced system intelligence and an understanding of a user's social relationships.
Children's ideas showed they cared about family and expected machines to understand their social context before making decisions.
- Score: 9.516294164912072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how children design, and what they value in, AI interfaces
that allow them to explicitly train their models (such as teachable machines)
could help increase such activities' impact and guide the design of future
technologies. In a co-design session using a modified storyboard, a team of 5
children (aged 7-13 years) and adult co-designers engaged in AI problem
formulation activities in which they imagined their own teachable machines. Our
findings, leveraging an established psychological value framework (the Rokeach
Value Survey), illuminate how children conceptualize and embed their values in
AI systems that they themselves devise to support their everyday activities.
Specifically, we find that children's proposed ideas require advanced system
intelligence, e.g., emotion detection and an understanding of a user's social
relationships. The underlying models could be trained on multiple modalities,
and any errors would be fixed by adding more data or by anticipating negative
examples. Children's ideas showed they cared about family and expected machines
to understand their social context before making decisions.
Related papers
- Children's Mental Models of Generative Visual and Text Based AI Models [0.027961972519572442]
Children ages 5-12 perceive, understand, and use generative AI models such as the text-based LLM ChatGPT and the visual model DALL-E.
We found that children generally have a very positive outlook towards AI and are excited about the ways AI may benefit and aid them in their everyday lives.
We hope that these findings will shine a light on children's mental models of AI and provide insight for how to design the best possible tools for children who will inevitably be using AI in their lifetimes.
arXiv Detail & Related papers (2024-05-21T06:18:00Z) - Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube [13.560874044962429]
Prominent ethical issues in high school AI education include data privacy, information leakage, abusive language, and fairness.
This paper describes technological components that were built to address ethical and trustworthy concerns in a multi-modal collaborative platform.
For data privacy, we want to ensure that the informed consent of children, parents, and teachers is at the center of any data that is managed.
arXiv Detail & Related papers (2024-01-30T16:33:21Z) - Exploring Parent's Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities [52.828843153565984]
AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives.
This paper investigates how they function in practical storytelling and reading scenarios and how parents, the most critical stakeholders, experience and perceive them.
Our findings suggest that even though AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still cannot meet parents' expectations due to a series of interactive and algorithmic challenges.
arXiv Detail & Related papers (2024-01-24T20:55:40Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Towards Goldilocks Zone in Child-centered AI [0.0]
We argue for the need to understand a child's interaction process with AI.
We present several design recommendations to create value-driven interaction in child-centric AI.
arXiv Detail & Related papers (2023-03-20T15:52:33Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - From Modelling to Understanding Children's Behaviour in the Context of
Robotics and Social Artificial Intelligence [3.6017760602154576]
This workshop aims to promote a common ground among different disciplines such as developmental sciences, artificial intelligence and social robotics.
We will discuss cutting-edge research in the area of user modelling and adaptive systems for children.
arXiv Detail & Related papers (2022-10-20T10:58:42Z) - StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child
Interactive Storytelling with Flexible Parental Involvement [61.47157418485633]
We developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences.
A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
arXiv Detail & Related papers (2022-02-13T04:53:28Z) - Exploring Machine Teaching with Children [9.212643929029403]
Iteratively building and testing machine learning models can help children develop creativity, flexibility, and comfort with machine learning and artificial intelligence.
We explore how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers.
arXiv Detail & Related papers (2021-09-23T15:18:53Z) - The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on learning from human feedback.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect on model judgments and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)