Integrating Physiological Data with Large Language Models for Empathic Human-AI Interaction
- URL: http://arxiv.org/abs/2404.15351v1
- Date: Sun, 14 Apr 2024 23:40:00 GMT
- Title: Integrating Physiological Data with Large Language Models for Empathic Human-AI Interaction
- Authors: Poorvesh Dongre, Majid Behravan, Kunal Gupta, Mark Billinghurst, Denis Gračanin
- Abstract summary: This paper explores enhancing empathy in Large Language Models (LLMs) by integrating them with physiological data.
We propose a physiological computing approach that includes developing deep learning models that use physiological data for recognizing psychological states.
- Score: 19.259030493272345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores enhancing empathy in Large Language Models (LLMs) by integrating them with physiological data. We propose a physiological computing approach that includes developing deep learning models that use physiological data for recognizing psychological states and integrating the predicted states with LLMs for empathic interaction. We showcase the application of this approach in an Empathic LLM (EmLLM) chatbot for stress monitoring and control. We also discuss the results of a pilot study that evaluates this EmLLM chatbot based on its ability to accurately predict user stress, provide human-like responses, and assess the therapeutic alliance with the user.
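The abstract describes a two-stage pipeline: a deep learning model predicts a psychological state from physiological data, and the predicted state is then injected into the LLM interaction. A minimal sketch of that flow, assuming illustrative names and a toy rule-based stand-in for the recognition model (`predict_stress` and `build_empathic_prompt` are not the authors' API):

```python
# Sketch of the EmLLM-style pipeline: physiological features -> predicted
# psychological state -> state-conditioned LLM prompt. The thresholds and
# function names here are illustrative placeholders, not the paper's model.

def predict_stress(mean_hr_bpm: float, hrv_rmssd_ms: float) -> str:
    """Toy stand-in for the deep learning state-recognition model:
    elevated heart rate plus low heart-rate variability -> 'stressed'."""
    if mean_hr_bpm > 90 and hrv_rmssd_ms < 30:
        return "stressed"
    return "calm"

def build_empathic_prompt(user_message: str, state: str) -> str:
    """Condition the chatbot on the predicted state so its reply can be
    empathic, as in the stress-monitoring EmLLM chatbot."""
    return (
        f"The user's physiological signals suggest they are {state}. "
        f"Respond empathically and, if stressed, offer a brief calming step.\n"
        f"User: {user_message}"
    )

prompt = build_empathic_prompt("I have a deadline tomorrow.",
                               predict_stress(102, 22))
```

In the actual system the rule-based stand-in would be replaced by a trained model over raw sensor streams; the key design point is that the LLM only ever sees a symbolic state label, not the raw physiological signal.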
Related papers
- Integrating Reinforcement Learning and AI Agents for Adaptive Robotic Interaction and Assistance in Dementia Care [5.749791442522375]
This study explores a novel approach to advancing dementia care by integrating socially assistive robotics, reinforcement learning (RL), large language models (LLMs), and clinical domain expertise within a simulated environment.
arXiv Detail & Related papers (2025-01-28T06:38:24Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
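The "multimodal transcript" idea is to serialize verbal and non-verbal cues into one time-ordered text stream an LLM can consume. A minimal sketch under assumed formats (the tagging scheme and function name are illustrative, not the paper's):

```python
# Merge speech utterances and non-verbal behavior annotations into a single
# time-ordered textual transcript. The SPEECH/[NONVERBAL] tag format is a
# hypothetical illustration of the fusion strategy, not the paper's scheme.

def to_multimodal_transcript(utterances, nonverbal_events):
    """utterances and nonverbal_events are lists of (timestamp_s, text)."""
    tagged = [(t, f"SPEECH: {s}") for t, s in utterances]
    tagged += [(t, f"[NONVERBAL: {e}]") for t, e in nonverbal_events]
    # Sorting by timestamp interleaves the modalities chronologically.
    return "\n".join(line for _, line in sorted(tagged))

transcript = to_multimodal_transcript(
    [(0.0, "So, what did you think of the talk?"), (4.2, "Oh... it was fine.")],
    [(3.8, "listener averts gaze"), (4.5, "flat vocal tone")],
)
```

An engagement predictor could then feed this single text stream to an LLM instead of fusing modality-specific feature vectors.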
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinatory" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Assessing Empathy in Large Language Models with Real-World Physician-Patient Interactions [9.327472312657392]
The integration of Large Language Models (LLMs) into the healthcare domain has the potential to significantly enhance patient care and support.
This study investigates the question: "Can ChatGPT respond with a greater degree of empathy than physicians typically offer?"
We collect a de-identified dataset of patient messages and physician responses from Mayo Clinic and generate alternative replies using ChatGPT.
arXiv Detail & Related papers (2024-05-26T01:58:57Z)
- Probabilistic emotion and sentiment modelling of patient-reported experiences [0.04096453902709291]
This study introduces a novel methodology for modelling patient emotions from online patient experience narratives.
We employ metadata network topic modelling to analyse patient-reported experiences from Care Opinion.
We develop a probabilistic, context-specific emotion recommender system capable of predicting both multilabel emotions and binary sentiments.
arXiv Detail & Related papers (2024-01-09T05:39:20Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- Synthesizing Affective Neurophysiological Signals Using Generative Models: A Review Paper [28.806992102323324]
The integration of emotional intelligence in machines is an important step in advancing human-computer interaction.
The scarcity of public affective datasets presents a challenge.
We emphasize the use of generative models to address this issue in neurophysiological signals.
arXiv Detail & Related papers (2023-06-05T08:38:30Z)
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA requires only a modest amount of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
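ENIGMA's reported metric is correlation with human evaluation scores. A minimal illustration of that validation step, with invented toy scores (this sketches only the metric, not ENIGMA's off-policy estimator itself):

```python
# Pearson correlation between estimated dialog-quality scores and human
# evaluation scores. The score values below are invented for illustration.
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [3.1, 4.0, 2.5, 4.6, 3.8]      # human ratings per dialog system (toy)
estimated = [3.0, 4.2, 2.7, 4.4, 3.9]  # off-policy estimates (toy)
r = pearson(human, estimated)          # close to 1.0 indicates good agreement
```

A higher correlation means the automatic estimates rank systems similarly to human judges, which is the criterion the abstract's comparison uses.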
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop TrajNet++, a large-scale interaction-centric benchmark, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- Embodied Synaptic Plasticity with Online Reinforcement learning [5.6006805285925445]
This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields.
We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks.
arXiv Detail & Related papers (2020-03-03T10:29:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.