MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences
- URL: http://arxiv.org/abs/2409.09570v1
- Date: Sun, 15 Sep 2024 01:10:46 GMT
- Title: MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences
- Authors: Subigya Nepal, Arvind Pillai, William Campbell, Talie Massachi, Michael V. Heinz, Ashmita Kunwar, Eunsol Soul Choi, Orson Xu, Joanna Kuc, Jeremy Huckins, Jason Holden, Sarah M. Preum, Colin Depp, Nicholas Jacobson, Mary Czerwinski, Eric Granholm, Andrew T. Campbell
- Abstract summary: MindScape pioneers a novel approach to AI-powered journaling by integrating passively collected behavioral patterns.
This integration creates a highly personalized and context-aware journaling experience.
We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect.
- Score: 6.120545056775202
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mental health concerns are prevalent among college students, highlighting the need for effective interventions that promote self-awareness and holistic well-being. MindScape pioneers a novel approach to AI-powered journaling by integrating passively collected behavioral patterns such as conversational engagement, sleep, and location with Large Language Models (LLMs). This integration creates a highly personalized and context-aware journaling experience, enhancing self-awareness and well-being by embedding behavioral intelligence into AI. We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect (7%), reducing negative affect (11%), loneliness (6%), and anxiety and depression, with a significant week-over-week decrease in PHQ-4 scores (-0.25 coefficient), alongside improvements in mindfulness (7%) and self-reflection (6%). The study highlights the advantages of contextual AI journaling, with participants particularly appreciating the tailored prompts and insights provided by the MindScape app. Our analysis also includes a comparison of responses to AI-driven contextual versus generic prompts, participant feedback insights, and proposed strategies for leveraging contextual AI journaling to improve well-being on college campuses. By showcasing the potential of contextual AI journaling to support mental health, we provide a foundation for further investigation into the effects of contextual AI journaling on mental health and well-being.
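To make the sensing-to-prompt idea concrete, the following is a minimal Python sketch of how passively sensed signals (sleep, conversational engagement, location) might be summarized and folded into an LLM instruction that asks for a personalized journaling prompt. The feature names, thresholds, and prompt wording are illustrative assumptions rather than the paper's actual implementation, and the resulting string would still need to be sent to an LLM of the reader's choice.

```python
# Hypothetical sketch of MindScape-style contextual prompt generation:
# passively sensed behavioral features are summarized and embedded in an
# LLM instruction that requests a personalized journaling prompt. Feature
# names and thresholds below are illustrative assumptions, not from the paper.

from dataclasses import dataclass

@dataclass
class DailyBehavior:
    sleep_hours: float            # inferred sleep duration
    conversation_minutes: float   # ambient conversational engagement
    places_visited: int           # distinct location clusters

def summarize(behavior: DailyBehavior) -> str:
    """Turn raw sensed values into short natural-language observations."""
    notes = []
    if behavior.sleep_hours < 6:
        notes.append(f"slept only {behavior.sleep_hours:.1f} hours")
    if behavior.conversation_minutes < 15:
        notes.append("had little social conversation")
    if behavior.places_visited <= 1:
        notes.append("stayed mostly in one place")
    return "; ".join(notes) if notes else "had a fairly typical day"

def build_journaling_prompt(behavior: DailyBehavior) -> str:
    """Compose the instruction that would be passed to an LLM."""
    context = summarize(behavior)
    return (
        "You are a supportive journaling assistant for a college student. "
        f"Today the student {context}. "
        "Write one brief, non-judgmental journaling prompt that encourages "
        "self-reflection about this pattern."
    )

if __name__ == "__main__":
    today = DailyBehavior(sleep_hours=5.2, conversation_minutes=8.0, places_visited=1)
    print(build_journaling_prompt(today))  # this string would be sent to an LLM API
```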
Related papers
- ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline [0.0]
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z)
- Linguistic Comparison of AI- and Human-Written Responses to Online Mental Health Queries [17.133059430285627]
Generative AI and large language models (LLMs) have introduced new possibilities for scalable, around-the-clock mental health assistance.
In this study, we harnessed 24,114 posts and 138,758 online community (OC) responses from 55 online support communities on Reddit.
Our findings revealed that AI responses are more verbose, readable, and analytically structured, but lack linguistic diversity and personal narratives inherent in human-human interactions.
arXiv Detail & Related papers (2025-04-12T16:20:02Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- AI-Enhanced Cognitive Behavioral Therapy: Deep Learning and Large Language Models for Extracting Cognitive Pathways from Social Media Texts [27.240795549935463]
We gathered data from social media and established the task of extracting cognitive pathways.
We structured a text summarization task to help psychotherapists quickly grasp the essential information.
Our experiments evaluate the performance of deep learning and large language models.
arXiv Detail & Related papers (2024-04-17T14:55:27Z)
- Contextual AI Journaling: Integrating LLM and Time Series Behavioral Sensing Technology to Promote Self-Reflection and Well-being using the MindScape App [6.479200977503295]
MindScape aims to study the benefits of integrating time series behavioral patterns with Large Language Models (LLMs).
We discuss the MindScape contextual journal App design that uses LLMs and behavioral sensing to generate contextual and personalized journaling prompts crafted to encourage self-reflection and emotional development.
arXiv Detail & Related papers (2024-03-30T23:01:34Z)
- PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- Facilitating Self-Guided Mental Health Interventions Through Human-Language Model Interaction: A Case Study of Cognitive Restructuring [8.806947407907137]
We study how human-language model interaction can support self-guided mental health interventions.
We design and evaluate a system that uses language models to support people through various steps of cognitive restructuring.
arXiv Detail & Related papers (2023-10-24T02:23:34Z)
- MindShift: Leveraging Large Language Models for Mental-States-Based Problematic Smartphone Use Intervention [34.21508978116272]
Problematic smartphone use negatively affects physical and mental health.
Despite the wide range of prior research, existing persuasive techniques are not flexible enough to provide dynamic persuasion content.
We leveraged large language models (LLMs) to enable the automatic and dynamic generation of effective persuasion content.
We developed MindShift, a novel LLM-powered problematic smartphone use intervention technique.
arXiv Detail & Related papers (2023-09-28T17:49:03Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- Handwriting and Drawing for Depression Detection: A Preliminary Study [53.11777541341063]
Short-term COVID-19 effects on mental health included a significant increase in anxiety and depressive symptoms.
The aim of this study is to use a new tool, online handwriting and drawing analysis, to discriminate between healthy individuals and depressed patients.
arXiv Detail & Related papers (2023-02-05T22:33:49Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data from the Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
arXiv Detail & Related papers (2022-07-03T11:33:52Z)
- Human-AI Collaboration Enables More Empathic Conversations in Text-based Peer-to-Peer Mental Health Support [10.743204843534512]
We develop Hailey, an AI-in-the-loop agent that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).
We show that our Human-AI collaboration approach leads to a 19.60% increase in conversational empathy between peers overall.
We find a larger 38.88% increase in empathy within the subsample of peer supporters who self-identify as experiencing difficulty providing support.
arXiv Detail & Related papers (2022-03-28T23:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.