Learning to Generate Context-Sensitive Backchannel Smiles for Embodied
AI Agents with Applications in Mental Health Dialogues
- URL: http://arxiv.org/abs/2402.08837v1
- Date: Tue, 13 Feb 2024 22:47:22 GMT
- Authors: Maneesh Bilalpur, Mert Inan, Dorsa Zeinali, Jeffrey F. Cohn and Malihe
Alikhani
- Abstract summary: Embodied agents with advanced interactive capabilities emerge as a promising and cost-effective supplement to traditional caregiving methods.
We annotated backchannel smiles in videos of intimate face-to-face conversations over topics such as mental health, illness, and relationships.
Using cues from speech prosody and language along with the demographics of the speaker and listener, we identified significant predictors of the intensity of backchannel smiles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Addressing the critical shortage of mental health resources for effective
screening, diagnosis, and treatment remains a significant challenge. This
scarcity underscores the need for innovative solutions, particularly in
enhancing the accessibility and efficacy of therapeutic support. Embodied
agents with advanced interactive capabilities emerge as a promising and
cost-effective supplement to traditional caregiving methods. Crucial to these
agents' effectiveness is their ability to simulate non-verbal behaviors, like
backchannels, that are pivotal in establishing rapport and understanding in
therapeutic contexts but remain under-explored. To improve the rapport-building
capabilities of embodied agents we annotated backchannel smiles in videos of
intimate face-to-face conversations over topics such as mental health, illness,
and relationships. We hypothesized that both speaker and listener behaviors
affect the duration and intensity of backchannel smiles. Using cues from speech
prosody and language along with the demographics of the speaker and listener,
we identified significant predictors of the intensity of backchannel smiles.
Based on our findings, we introduce backchannel smile production in
embodied agents as a generation problem. Our attention-based generative model
suggests that listener information offers performance improvements over the
baseline speaker-centric generation approach. Conditioned generation using the
significant predictors of smile intensity provides statistically significant
improvements in empirical measures of generation quality. A user study in which
generated smiles were transferred to an embodied agent suggests that an agent
with backchannel smiles is perceived as more human-like and is an attractive
alternative to an agent without them for non-personal conversations.
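The abstract's listener-conditioned, attention-based generation idea can be illustrated with a minimal sketch. This is not the authors' model: all dimensions, weight matrices, and feature names below are hypothetical. The sketch shows one plausible way to condition a per-utterance smile-intensity prediction on listener information, by deriving the attention query from a listener vector and concatenating that vector to the pooled speaker context.

```python
import numpy as np

# Illustrative sketch (not the paper's architecture): a single attention head
# pools the speaker's prosody/language features over time, with the query
# derived from a listener-demographics vector, to predict a backchannel smile
# intensity in [0, 1]. All shapes and weights here are hypothetical.

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Scaled dot-product attention over the speaker's feature timeline."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # one score per frame, (T,)
    weights = softmax(scores)                         # attention distribution
    return weights @ values, weights                  # pooled context, weights

def predict_smile_intensity(speaker_feats, listener_vec, W_q, W_out):
    """Condition on the listener by (a) building the query from the listener
    vector and (b) concatenating it to the context before the output layer."""
    query = W_q @ listener_vec                        # listener-informed query
    context, weights = attend(query, speaker_feats, speaker_feats)
    h = np.concatenate([context, listener_vec])
    intensity = 1.0 / (1.0 + np.exp(-(W_out @ h)))    # sigmoid bounds to (0, 1)
    return intensity, weights

T, d, d_l = 20, 8, 4                       # frames, speaker dim, listener dim
speaker_feats = rng.normal(size=(T, d))    # e.g. prosody + language embeddings
listener_vec = rng.normal(size=d_l)        # e.g. encoded listener demographics
W_q = rng.normal(size=(d, d_l))            # query projection (random, untrained)
W_out = rng.normal(size=(d + d_l,))        # output projection (random, untrained)

intensity, weights = predict_smile_intensity(speaker_feats, listener_vec, W_q, W_out)
```

In a trained version, `W_q` and `W_out` would be learned, and the speaker-centric baseline the abstract compares against would simply drop `listener_vec` from both the query and the concatenation.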
Related papers
- Nonverbal Interaction Detection [83.40522919429337]
This work addresses a new challenge of understanding human nonverbal interaction in social contexts.
We contribute a novel large-scale dataset, called NVI, which is meticulously annotated to include bounding boxes for humans and corresponding social groups.
Second, we establish a new task, NVI-DET, for nonverbal interaction detection, which is formalized as identifying triplets of the form <individual, group, interaction> from images.
Third, we propose a nonverbal interaction detection hypergraph (NVI-DEHR), a new approach that explicitly models high-order nonverbal interactions using hypergraphs.
arXiv Detail & Related papers (2024-07-11T02:14:06Z)
- Mimicry and the Emergence of Cooperative Communication [0.6629765271909505]
Communication between agents is a critical component of cooperative multi-agent systems.
We explore the effects of when agents can mimic preexisting, externally generated useful signals.
Our results show that both evolutionary optimisation and reinforcement learning may benefit from this intervention.
arXiv Detail & Related papers (2024-05-26T16:34:52Z)
- Empathy Through Multimodality in Conversational Interfaces [1.360649555639909]
Conversational Health Agents (CHAs) are redefining healthcare by offering nuanced support that transcends textual analysis to incorporate emotional intelligence.
This paper introduces an LLM-based CHA engineered for rich, multimodal dialogue-especially in the realm of mental health support.
It adeptly interprets and responds to users' emotional states by analyzing multimodal cues, thus delivering contextually aware and empathetically resonant verbal responses.
arXiv Detail & Related papers (2024-05-08T02:48:29Z)
- EmoScan: Automatic Screening of Depression Symptoms in Romanized Sinhala Tweets [0.0]
This work explores the utilization of Romanized Sinhala social media data to identify individuals at risk of depression.
A machine learning-based framework is presented for the automatic screening of depression symptoms by analyzing language patterns, sentiment, and behavioural cues.
arXiv Detail & Related papers (2024-03-28T10:31:09Z)
- HealMe: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy [25.908522131646258]
We unveil the Helping and Empowering through Adaptive Language in Mental Enhancement (HealMe) model.
This novel cognitive reframing therapy method effectively addresses deep-rooted negative thoughts and fosters rational, balanced perspectives.
We adopt the first comprehensive and expertly crafted psychological evaluation metrics, specifically designed to rigorously assess the performance of cognitive reframing.
arXiv Detail & Related papers (2024-02-26T09:10:34Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Leveraging Implicit Feedback from Deployment Data in Dialogue [83.02878726357523]
We study improving social conversational agents by learning from natural dialogue between users and a deployed model.
We leverage signals like user response length, sentiment and reaction of the future human utterances in the collected dialogue episodes.
arXiv Detail & Related papers (2023-07-26T11:34:53Z)
- TalkTive: A Conversational Agent Using Backchannels to Engage Older Adults in Neurocognitive Disorders Screening [51.97352212369947]
We analyzed 246 conversations of cognitive assessments between older adults and human assessors.
We derived the categories of reactive backchannels and proactive backchannels.
These categories inform the development of TalkTive, a conversational agent that can predict both the timing and the form of backchanneling.
arXiv Detail & Related papers (2022-02-16T17:55:34Z)
- Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations [34.670548892766625]
A BERT-based model is proposed for automatic behavioral scoring of a specific type of psychotherapy, Cognitive Behavioral Therapy (CBT).
The model is trained in a multi-task manner in order to achieve higher interpretability.
BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements.
arXiv Detail & Related papers (2021-02-23T09:22:29Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that mutual understanding is an essential signal of a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.