Learning to Generate Context-Sensitive Backchannel Smiles for Embodied
AI Agents with Applications in Mental Health Dialogues
- URL: http://arxiv.org/abs/2402.08837v1
- Date: Tue, 13 Feb 2024 22:47:22 GMT
- Authors: Maneesh Bilalpur, Mert Inan, Dorsa Zeinali, Jeffrey F. Cohn and Malihe
Alikhani
- Abstract summary: Embodied agents with advanced interactive capabilities emerge as a promising and cost-effective supplement to traditional caregiving methods.
We annotated backchannel smiles in videos of intimate face-to-face conversations over topics such as mental health, illness, and relationships.
Using cues from speech prosody and language, along with the demographics of the speaker and listener, we found these cues to be significant predictors of the intensity of backchannel smiles.
- Score: 21.706636640014594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Addressing the critical shortage of mental health resources for effective
screening, diagnosis, and treatment remains a significant challenge. This
scarcity underscores the need for innovative solutions, particularly in
enhancing the accessibility and efficacy of therapeutic support. Embodied
agents with advanced interactive capabilities emerge as a promising and
cost-effective supplement to traditional caregiving methods. Crucial to these
agents' effectiveness is their ability to simulate non-verbal behaviors, like
backchannels, that are pivotal in establishing rapport and understanding in
therapeutic contexts but remain under-explored. To improve the rapport-building
capabilities of embodied agents, we annotated backchannel smiles in videos of
intimate face-to-face conversations over topics such as mental health, illness,
and relationships. We hypothesized that both speaker and listener behaviors
affect the duration and intensity of backchannel smiles. Using cues from speech
prosody and language, along with the demographics of the speaker and listener,
we found these cues to be significant predictors of the intensity of backchannel
smiles. Based on our findings, we frame backchannel smile production in
embodied agents as a generation problem. Experiments with our attention-based
generative model suggest that incorporating listener information improves
performance over a baseline speaker-centric approach. Conditioned generation using the
significant predictors of smile intensity provides statistically significant
improvements in empirical measures of generation quality. In a user study in
which generated smiles were transferred to an embodied agent, the agent with
backchannel smiles was perceived as more human-like and as an attractive
alternative to an agent without backchannel smiles for non-personal
conversations.
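The predictor analysis described above can be made concrete with a small regression sketch. The snippet below is a hypothetical illustration, not the authors' pipeline: the feature names (pitch, loudness, sentiment, a listener demographic indicator), the synthetic data, and the OLS model are all assumptions standing in for the paper's actual cues and statistical analysis.

```python
# Hypothetical sketch: regress backchannel smile intensity on prosodic,
# linguistic, and demographic cues, then inspect which coefficients are
# statistically significant. Features and data are illustrative stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # assumed number of annotated backchannel smile events

X = np.column_stack([
    rng.normal(200, 40, n),   # speaker mean pitch (Hz): prosody cue
    rng.normal(65, 8, n),     # speaker loudness (dB): prosody cue
    rng.normal(0.0, 1.0, n),  # utterance sentiment score: language cue
    rng.integers(0, 2, n),    # listener demographic indicator (binary-coded)
])
y = rng.uniform(0.0, 5.0, n)  # smile intensity labels (e.g., an AU12-style scale)

# Fit ordinary least squares; p-values in the summary flag significant predictors.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```

Significant coefficients from such a fit correspond to what the abstract calls "significant predictors" of smile intensity.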
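The generation step can likewise be sketched. The following minimal PyTorch model is an assumption-laden illustration of an attention-based generator that consumes speaker and listener cues, fuses a conditioning vector of significant predictors, and emits a per-frame smile intensity trajectory; the architecture, shapes, and names are guesses for exposition, not the authors' released model.

```python
# Minimal sketch of attention-based, conditioned backchannel smile generation.
# Architecture, dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn

class SmileGenerator(nn.Module):
    def __init__(self, feat_dim=16, cond_dim=4, hidden=64, out_len=50):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # The conditioning vector (e.g., significant predictors such as
        # demographics) is fused just before decoding.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_len),  # one smile intensity value per frame
        )

    def forward(self, cues, cond):
        # cues: (batch, time, feat_dim) speaker + listener features
        # cond: (batch, cond_dim) significant predictors of smile intensity
        h = self.encoder(cues)
        h, _ = self.attn(h, h, h)  # self-attention over the conversational context
        pooled = h.mean(dim=1)     # summarize the context window
        return self.decoder(torch.cat([pooled, cond], dim=-1))

# Hypothetical usage: 8 clips, 100 frames of 16-dim cues, 4 conditioning values.
generator = SmileGenerator()
trajectory = generator(torch.randn(8, 100, 16), torch.randn(8, 4))
print(trajectory.shape)  # torch.Size([8, 50])
```

Dropping the listener half of the cues recovers the speaker-centric baseline that the abstract compares against.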
Related papers
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning that an agent's utterances can influence its conversational partner, elicit information, or change the partner's opinion.
We use this fact to rewrite and augment existing suboptimal data and, via offline reinforcement learning (RL), train an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Empathy Through Multimodality in Conversational Interfaces [1.360649555639909]
Conversational Health Agents (CHAs) are redefining healthcare by offering nuanced support that transcends textual analysis to incorporate emotional intelligence.
This paper introduces an LLM-based CHA engineered for rich, multimodal dialogue, especially in the realm of mental health support.
It adeptly interprets and responds to users' emotional states by analyzing multimodal cues, thus delivering contextually aware and empathetically resonant verbal responses.
arXiv Detail & Related papers (2024-05-08T02:48:29Z)
- EmoScan: Automatic Screening of Depression Symptoms in Romanized Sinhala Tweets [0.0]
This work explores the utilization of Romanized Sinhala social media data to identify individuals at risk of depression.
A machine learning-based framework is presented for the automatic screening of depression symptoms by analyzing language patterns, sentiment, and behavioural cues.
arXiv Detail & Related papers (2024-03-28T10:31:09Z)
- HealMe: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy [25.908522131646258]
We unveil the Helping and Empowering through Adaptive Language in Mental Enhancement (HealMe) model.
This novel cognitive reframing therapy method effectively addresses deep-rooted negative thoughts and fosters rational, balanced perspectives.
We adopt the first comprehensive and expertly crafted psychological evaluation metrics, specifically designed to rigorously assess the performance of cognitive reframing.
arXiv Detail & Related papers (2024-02-26T09:10:34Z)
- Towards Mitigating Hallucination in Large Language Models via Self-Reflection [63.2543947174318]
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks.
This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets.
arXiv Detail & Related papers (2023-10-10T03:05:44Z)
- Leveraging Implicit Feedback from Deployment Data in Dialogue [83.02878726357523]
We study improving social conversational agents by learning from natural dialogue between users and a deployed model.
We leverage signals such as user response length, sentiment, and the reaction of future human utterances in the collected dialogue episodes.
arXiv Detail & Related papers (2023-07-26T11:34:53Z)
- TalkTive: A Conversational Agent Using Backchannels to Engage Older Adults in Neurocognitive Disorders Screening [51.97352212369947]
We analyzed 246 cognitive-assessment conversations between older adults and human assessors.
From these, we derived categories of reactive and proactive backchannels.
These categories were used in the development of TalkTive, a conversational agent (CA) that can predict both the timing and form of backchanneling.
arXiv Detail & Related papers (2022-02-16T17:55:34Z)
- Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations [34.670548892766625]
A BERT-based model is proposed for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT).
The model is trained in a multi-task manner in order to achieve higher interpretability.
BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements.
arXiv Detail & Related papers (2021-02-23T09:22:29Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver-based framework that explicitly models understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)