Wearable Meets LLM for Stress Management: A Duoethnographic Study Integrating Wearable-Triggered Stressors and LLM Chatbots for Personalized Interventions
- URL: http://arxiv.org/abs/2502.17650v1
- Date: Mon, 24 Feb 2025 20:56:23 GMT
- Title: Wearable Meets LLM for Stress Management: A Duoethnographic Study Integrating Wearable-Triggered Stressors and LLM Chatbots for Personalized Interventions
- Authors: Sameer Neupane, Poorvesh Dongre, Denis Gracanin, Santosh Kumar
- Abstract summary: Two researchers interacted with custom chatbots over 22 days, responding to wearable-detected physiological prompts and recording stressor phrases. They recorded their experiences in autoethnographic diaries and analyzed them during weekly discussions. Results showed that even though most events triggered by the wearable were meaningful, only one in five warranted an intervention.
- Score: 1.4808975406270157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use a duoethnographic approach to study how wearable-integrated LLM chatbots can assist with personalized stress management, addressing the growing need for immediacy and tailored interventions. Two researchers interacted with custom chatbots over 22 days, responding to wearable-detected physiological prompts, recording stressor phrases, and using them to seek tailored interventions from their LLM-powered chatbots. They recorded their experiences in autoethnographic diaries and analyzed them during weekly discussions, focusing on the relevance, clarity, and impact of chatbot-generated interventions. Results showed that even though most events triggered by the wearable were meaningful, only one in five warranted an intervention. It also showed that interventions tailored with brief event descriptions were more effective than generic ones. By examining the intersection of wearables and LLMs, this research contributes to developing more effective, user-centric mental health tools for real-time stress relief and behavior change.
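As a rough illustration of the workflow the abstract describes (a wearable-detected stress event, a brief user-recorded stressor phrase, and a tailored intervention request to an LLM chatbot), here is a minimal sketch. The prompt wording, the StressEvent fields, and the request_intervention helper are illustrative assumptions, not the authors' implementation; the LLM backend is left as a pluggable callable.

```python
from dataclasses import dataclass

@dataclass
class StressEvent:
    """A wearable-detected physiological event plus the user's brief description."""
    timestamp: str               # when the wearable flagged elevated arousal
    stressor_phrase: str         # short user-recorded description of the situation
    warrants_intervention: bool  # per the study, only about 1 in 5 events did

def build_intervention_prompt(event: StressEvent) -> str:
    """Compose a prompt tailored with the brief event description, which the study
    found more effective than a generic request."""
    return (
        f"My wearable detected a stress response at {event.timestamp}. "
        f"The situation: {event.stressor_phrase}. "
        "Suggest one brief, concrete stress-management intervention "
        "I can do right now, tailored to this situation."
    )

def request_intervention(event: StressEvent, call_llm) -> str | None:
    """Skip events that do not warrant an intervention; otherwise query the chatbot.
    `call_llm` is any callable that sends a prompt to an LLM and returns its reply."""
    if not event.warrants_intervention:
        return None
    return call_llm(build_intervention_prompt(event))

if __name__ == "__main__":
    # Placeholder LLM backend for demonstration; swap in a real chat API client.
    echo_llm = lambda prompt: f"[LLM reply to: {prompt[:60]}...]"
    event = StressEvent("2025-02-10T14:32", "back-to-back meetings, skipped lunch", True)
    print(request_intervention(event, echo_llm))
```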
Related papers
- Exposure to Content Written by Large Language Models Can Reduce Stigma Around Opioid Use Disorder in Online Communities [19.149341014846573]
Widespread stigma acts as a barrier to harm reduction efforts in the context of opioid use disorder (OUD).
This study examines whether large language models (LLMs) can help abate OUD-related stigma in online communities.
arXiv Detail & Related papers (2025-04-08T18:20:17Z) - LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment [75.44934940580112]
This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment. We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews. Our approach, tested on 236 real-world interviews, demonstrates strong correlations with clinician assessments.
arXiv Detail & Related papers (2025-01-07T08:49:04Z) - Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z) - Prompt Engineering a Schizophrenia Chatbot: Utilizing a Multi-Agent Approach for Enhanced Compliance with Prompt Instructions [0.0699049312989311]
Patients with schizophrenia often present with cognitive impairments that may hinder their ability to learn about their condition.
While Large Language Models (LLMs) have the potential to make topical mental health information more accessible and engaging, their black-box nature raises concerns about ethics and safety.
arXiv Detail & Related papers (2024-10-10T09:49:24Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Improving Engagement and Efficacy of mHealth Micro-Interventions for Stress Coping: an In-The-Wild Study [4.704094564944504]
The Personalized Context-aware intervention selection algorithm improves engagement and efficacy of mHealth interventions.
Even brief, one-minute interventions can significantly reduce perceived stress levels.
Our study contributes to the literature by introducing a personalized context-aware intervention selection algorithm.
arXiv Detail & Related papers (2024-07-16T11:22:22Z) - Large Language Model Agents for Improving Engagement with Behavior Change Interventions: Application to Digital Mindfulness [17.055863270116333]
Large Language Models show promise in providing human-like dialogues that could emulate social support.
We conducted two randomized experiments to assess the impact of LLM agents on user engagement with mindfulness exercises.
arXiv Detail & Related papers (2024-07-03T15:43:16Z) - LLM-based Conversational AI Therapist for Daily Functioning Screening and Psychotherapeutic Intervention via Everyday Smart Devices [7.43530731987025]
We propose a Conversational AI Therapist with psychotherapeutic Interventions (CaiTI)
CaiTI can screen day-to-day functioning using natural and psychotherapeutic conversations.
When the user needs further attention during the conversation, CaiTI can provide conversational psychotherapeutic interventions.
arXiv Detail & Related papers (2024-03-16T02:48:50Z) - Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Evaluation of In-Person Counseling Strategies To Develop Physical Activity Chatbot for Women [31.20917921863815]
This work introduces an intervention conversation dataset collected from a real-world physical activity intervention program for women.
We designed comprehensive annotation schemes in four dimensions (domain, strategy, social exchange, and task-focused exchange) and annotated a subset of dialogs.
To understand how human intervention induces effective behavior changes, we analyzed the relationships between the intervention strategies and the participants' changes in the barrier and social support for physical activity.
arXiv Detail & Related papers (2021-07-22T00:39:21Z) - Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z)