ChaCha: Leveraging Large Language Models to Prompt Children to Share
Their Emotions about Personal Events
- URL: http://arxiv.org/abs/2309.12244v4
- Date: Mon, 19 Feb 2024 03:22:07 GMT
- Authors: Woosuk Seo, Chanmo Yang, Young-Ho Kim
- Abstract summary: ChaCha encourages and guides children to share personal events and associated emotions.
ChaCha combines a state machine and large language models (LLMs) to keep the dialogue on track.
- Score: 6.486346903896692
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Children typically learn to identify and express emotions through sharing
their stories and feelings with others, particularly their family. However, it
is challenging for parents or siblings to have emotional communication with
children since children are still developing their communication skills. We
present ChaCha, a chatbot that encourages and guides children to share personal
events and associated emotions. ChaCha combines a state machine and large
language models (LLMs) to keep the dialogue on track while carrying on
free-form conversations. Through an exploratory study with 20 children (aged
8-12), we examine how ChaCha prompts children to share personal events and
guides them to describe associated emotions. Participants perceived ChaCha as a
close friend and shared their stories on various topics, such as family trips
and personal achievements. Based on the findings, we discuss opportunities for
leveraging LLMs to design child-friendly chatbots to support children in
sharing emotions.
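The abstract's "state machine plus LLMs" design can be sketched in a few lines: a fixed set of states keeps the conversation on track, while each state's instruction constrains what the free-form language model does at that step. The state names, transitions, and the `generate_reply` stub below are illustrative assumptions, not ChaCha's actual implementation (a real system would route the instruction to an LLM API as a system prompt).

```python
# Minimal sketch of a state-machine-guided chatbot, assuming a ChaCha-like
# flow. The states and transitions here are hypothetical, not the paper's
# actual design.

# Each state carries the instruction that constrains the LLM at that step.
STATES = {
    "share_event": {
        "instruction": "Invite the child to share a recent personal event.",
        "next": "label_emotion",
    },
    "label_emotion": {
        "instruction": "Help the child name the emotion tied to the event.",
        "next": "explore_emotion",
    },
    "explore_emotion": {
        "instruction": "Ask a gentle follow-up about why they felt that way.",
        "next": None,  # end of the guided flow; stay in this state
    },
}

def generate_reply(instruction: str, user_message: str) -> str:
    """Placeholder for an LLM call: a real system would send the state's
    instruction as a system prompt alongside the child's message."""
    return f"[{instruction}] (responding to: {user_message!r})"

class GuidedChat:
    """Advances through the states while each reply stays free-form."""

    def __init__(self, start: str = "share_event"):
        self.state = start

    def step(self, user_message: str) -> str:
        spec = STATES[self.state]
        reply = generate_reply(spec["instruction"], user_message)
        if spec["next"] is not None:
            self.state = spec["next"]  # keep the dialogue on track
        return reply

chat = GuidedChat()
print(chat.step("Hi!"))               # constrained by the share_event state
print(chat.step("We went camping."))  # constrained by the label_emotion state
print(chat.state)                     # the flow has advanced
```

The point of the split is that the state machine, not the model, owns the conversational goal at each step; swapping the stub for a real LLM call changes the wording of replies but not the guided structure.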
Related papers
- "I am here for you": How relational conversational AI appeals to adolescents, especially those who are socially and emotionally vulnerable [2.2481339018068596]
  General-purpose conversational AI chatbots and AI companions increasingly provide young adolescents with emotionally supportive conversations. These findings identify conversational style as a key design lever for youth AI safety.
  arXiv Detail & Related papers (2025-12-17T06:17:52Z)
- AutiHero: Leveraging Generative AI in Social Narratives to Engage Parents in Story-Driven Behavioral Guidance for Autistic Children [23.438204344138597]
  We present AutiHero, a generative AI-based social narrative system for behavioral guidance. AutiHero supports parents in creating personalized stories for their autistic children and reading them together.
  arXiv Detail & Related papers (2025-09-22T11:23:10Z)
- Designing for Engaging Communication Between Parents and Young Adult Children Through Shared Music Experiences [6.329321597138646]
  We develop DJ-Fam, a mobile application that enables parents and children to listen to their favorite songs and use them as conversation starters. A four-week deployment study with seven families in South Korea shows the potential of DJ-Fam to positively influence parent-child interaction.
  arXiv Detail & Related papers (2025-07-30T16:34:44Z)
- AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation [17.30104178658932]
  We present AACessTalk, a tablet-based, AI-mediated communication system that facilitates meaningful exchanges between a minimally verbal autistic (MVA) child and a parent.
  arXiv Detail & Related papers (2024-09-15T07:23:07Z)
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
  Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance. Current work in CEE mainly focuses on modeling semantic and emotional interactions in conversations. This work introduces a step-by-step reasoning method, the Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
  arXiv Detail & Related papers (2024-05-17T15:45:08Z)
- Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
  We propose a new task, Personality-affected Emotion Generation, which generates emotion based on the personality given to the dialog system. We analyze the challenges of this task: (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information from the dialog context. Results suggest that the proposed method improves emotion generation performance over the BERT-base model by 13% in macro-F1 and 5% in weighted-F1.
  arXiv Detail & Related papers (2024-04-03T08:48:50Z)
- CuentosIE: can a chatbot about "tales with a message" help to teach emotional intelligence? [0.07538606213726905]
  CuentosIE is a tool that monitors students/patients through the indicators and data it compiles. Its main contributions are the selection, collection, and classification of a set of specialized tales. A preliminary evaluation of the tool has yielded encouraging results.
  arXiv Detail & Related papers (2024-03-11T22:27:16Z)
- Exploring Parent's Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities [52.828843153565984]
  AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives. This paper investigates how these technologies function in practical storytelling and reading scenarios and how parents, the most critical stakeholders, experience and perceive them. The findings suggest that although AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still fall short of parents' expectations due to a series of interactive and algorithmic challenges.
  arXiv Detail & Related papers (2024-01-24T20:55:40Z)
- Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children [0.0]
  We propose a novel framework that integrates a fuzzy approach for recognizing emotions through the analysis of audio and video data. The FER dataset is used to detect facial emotions in video frames recorded from the screen during the game; for audio recognition of the sounds a child produces during play, the CREMA-D, TESS, RAVDESS, and SAVEE datasets are used.
  arXiv Detail & Related papers (2023-08-31T21:22:00Z)
- Utterance Emotion Dynamics in Children's Poems: Emotional Changes Across Age [29.467916405081272]
  We use lexicon- and machine learning-based approaches to quantify characteristics of emotion dynamics in poems written by children of various ages. We find that emotional variability, rise rates (i.e., emotional reactivity), and recovery rates (i.e., emotional regulation) increase with age.
  arXiv Detail & Related papers (2023-06-08T17:38:14Z)
- CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI [48.67259855309959]
  Most existing datasets for conversational AI ignore human personalities and emotions. We propose CPED, a large-scale Chinese personalized and emotional dialogue dataset containing more than 12K dialogues of 392 speakers from 40 TV shows.
  arXiv Detail & Related papers (2022-05-29T17:45:12Z)
- StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child Interactive Storytelling with Flexible Parental Involvement [61.47157418485633]
  We developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences. A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
  arXiv Detail & Related papers (2022-02-13T04:53:28Z)
- Annotation of Emotion Carriers in Personal Narratives [69.07034604580214]
  We are interested in the problem of understanding personal narratives (PNs): spoken or written recollections of facts, events, and thoughts. In PNs, emotion carriers are the speech or text segments that best explain the emotional state of the user. This work proposes and evaluates an annotation model for identifying emotion carriers in spoken personal narratives.
  arXiv Detail & Related papers (2020-02-27T15:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.