Contextual AI Journaling: Integrating LLM and Time Series Behavioral Sensing Technology to Promote Self-Reflection and Well-being using the MindScape App
- URL: http://arxiv.org/abs/2404.00487v1
- Date: Sat, 30 Mar 2024 23:01:34 GMT
- Authors: Subigya Nepal, Arvind Pillai, William Campbell, Talie Massachi, Eunsol Soul Choi, Orson Xu, Joanna Kuc, Jeremy Huckins, Jason Holden, Colin Depp, Nicholas Jacobson, Mary Czerwinski, Eric Granholm, Andrew T. Campbell
- Abstract summary: MindScape aims to study the benefits of integrating time series behavioral patterns with Large Language Models (LLMs)
We discuss the MindScape contextual journal App design that uses LLMs and behavioral sensing to generate contextual and personalized journaling prompts crafted to encourage self-reflection and emotional development.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: MindScape aims to study the benefits of integrating time series behavioral patterns (e.g., conversational engagement, sleep, location) with Large Language Models (LLMs) to create a new form of contextual AI journaling, promoting self-reflection and well-being. We argue that integrating behavioral sensing in LLMs will likely lead to a new frontier in AI. In this Late-Breaking Work paper, we discuss the MindScape contextual journal App design that uses LLMs and behavioral sensing to generate contextual and personalized journaling prompts crafted to encourage self-reflection and emotional development. We also discuss the MindScape study of college students based on a preliminary user study and our upcoming study to assess the effectiveness of contextual AI journaling in promoting better well-being on college campuses. MindScape represents a new application class that embeds behavioral intelligence in AI.
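The abstract describes the core design: passively sensed time-series signals (conversational engagement, sleep, location) are summarized and handed to an LLM, which produces a contextual, personalized journaling prompt. A minimal sketch of that pipeline is below; the function names, feature set, and thresholds are illustrative assumptions, not the paper's actual implementation, and the LLM call is left abstract.

```python
# Hypothetical sketch of contextual prompt generation: daily behavioral
# sensing features are summarized into natural language and wrapped in an
# instruction for an LLM. All names and thresholds here are assumptions
# for illustration; the MindScape paper does not specify this code.

def summarize_behavior(sleep_hours: float, places_visited: int,
                       conversation_minutes: float) -> str:
    """Turn raw daily sensing features into a short natural-language summary."""
    notes = []
    notes.append(f"slept about {sleep_hours:.1f} hours"
                 + (" (below their usual range)" if sleep_hours < 6 else ""))
    notes.append(f"visited {places_visited} distinct places")
    notes.append(f"spent {conversation_minutes:.0f} minutes in conversation")
    return "; ".join(notes)

def build_llm_instruction(summary: str) -> str:
    """Wrap the behavioral summary in an instruction for the LLM
    (the actual model call would happen downstream)."""
    return ("You are a supportive journaling assistant. Based on today's "
            f"behavioral context ({summary}), write one open-ended journaling "
            "prompt that encourages self-reflection. Do not quote raw numbers.")

if __name__ == "__main__":
    summary = summarize_behavior(5.2, 3, 40)
    print(build_llm_instruction(summary))
```

The key design point this illustrates is that the LLM never sees raw sensor streams, only a compact behavioral summary, which keeps the prompt grounded in the user's day while remaining private and interpretable.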
Related papers
- SIV-Bench: A Video Benchmark for Social Interaction Understanding and Reasoning [53.16179295245888]
We introduce SIV-Bench, a novel video benchmark for evaluating the capabilities of Multimodal Large Language Models (MLLMs) across Social Scene Understanding (SSU), Social State Reasoning (SSR), and Social Dynamics Prediction (SDP). SIV-Bench features 2,792 video clips and 8,792 meticulously generated question-answer pairs derived from a human-LLM collaborative pipeline. It also includes a dedicated setup for analyzing the impact of different textual cues: original on-screen text, added dialogue, or no text.
arXiv Detail & Related papers (2025-06-05T05:51:35Z)
- SocialEval: Evaluating Social Intelligence of Large Language Models [70.90981021629021]
Social Intelligence (SI) equips humans with interpersonal abilities to behave wisely in navigating social interactions to achieve social goals. This presents an operational evaluation paradigm: outcome-oriented goal achievement evaluation and process-oriented interpersonal ability evaluation. We propose SocialEval, a script-based bilingual SI benchmark, integrating outcome- and process-oriented evaluation by manually crafting narrative scripts.
arXiv Detail & Related papers (2025-06-01T08:36:51Z)
- TimeHC-RL: Temporal-aware Hierarchical Cognitive Reinforcement Learning for Enhancing LLMs' Social Intelligence [62.21106561772784]
We introduce Temporal-aware Hierarchical Cognitive Reinforcement Learning (TimeHC-RL) for enhancing Large Language Models' social intelligence. Experimental results reveal the superiority of our proposed TimeHC-RL method compared to the widely adopted System 2 RL method. It gives the 7B backbone model wings, enabling it to rival the performance of advanced models like DeepSeek-R1 and OpenAI-O3.
arXiv Detail & Related papers (2025-05-30T12:01:06Z)
- Supporting Students' Reading and Cognition with AI [12.029238454394445]
We analyzed text from 124 sessions with AI tools to understand users' reading processes and cognitive engagement.
We propose design implications for future AI reading-support systems, including structured scaffolds for lower-level cognitive tasks.
We advocate for adaptive, human-in-the-loop features that allow students and instructors to tailor their reading experiences with AI.
arXiv Detail & Related papers (2025-04-07T17:51:27Z)
- Building Knowledge from Interactions: An LLM-Based Architecture for Adaptive Tutoring and Social Reasoning [42.09560737219404]
Large Language Models show promise in human-like communication, but their standalone use is hindered by memory constraints and contextual incoherence.
This work presents a multimodal, cognitively inspired framework that enhances LLM-based autonomous decision-making in social and task-oriented Human-Robot Interaction.
To further enhance autonomy and personalization, we introduce a memory system for selecting, storing and retrieving experiences.
arXiv Detail & Related papers (2025-04-02T10:45:41Z)
- The StudyChat Dataset: Student Dialogues With ChatGPT in an Artificial Intelligence Course [2.1485350418225244]
StudyChat is a publicly available dataset capturing real-world student interactions with an LLM-powered tutor.
We deploy a web application that replicates ChatGPT's core functionalities, and use it to log student interactions with the LLM.
We analyze these interactions, highlight behavioral trends, and analyze how specific usage patterns relate to course outcomes.
arXiv Detail & Related papers (2025-03-11T00:17:07Z)
- Towards Anthropomorphic Conversational AI Part I: A Practical Framework [49.62013440962072]
We introduce a multi-module framework designed to replicate the key aspects of human intelligence involved in conversations.
In the second stage of our approach, these conversational data, after filtering and labeling, can serve as training and testing data for reinforcement learning.
arXiv Detail & Related papers (2025-02-28T03:18:39Z)
- MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences [6.120545056775202]
MindScape pioneers a novel approach to AI-powered journaling by integrating passively collected behavioral patterns.
This integration creates a highly personalized and context-aware journaling experience.
We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect.
arXiv Detail & Related papers (2024-09-15T01:10:46Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- Modeling Emotions and Ethics with Large Language Models [2.5200794639628032]
We first model eight fundamental human emotions, presented as opposing pairs, and employ collaborative LLMs to reinterpret and express these emotions.
Our focus extends to embedding a latent ethical dimension within LLMs, guided by a novel self-supervised learning algorithm with human feedback.
arXiv Detail & Related papers (2024-04-15T05:30:26Z)
- Spontaneous Theory of Mind for Artificial Intelligence [2.7624021966289605]
We argue for a principled approach to studying and developing AI Theory of Mind (ToM).
We suggest that a robust, or general, ASI will respond to prompts and spontaneously engage in social reasoning.
arXiv Detail & Related papers (2024-02-16T22:41:13Z)
- Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory [8.439724621886779]
The development of Large Language Models (LLMs) provides human-centered Artificial General Intelligence (AGI) with a glimmer of hope.
Empathy serves as a key emotional attribute of humanity, playing an irreplaceable role in human-centered AGI.
In this paper, we design an innovative encoder module inspired by self-presentation theory in sociology, which specifically processes sensibility and rationality sentences in dialogues.
arXiv Detail & Related papers (2023-12-14T07:38:12Z)
- Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers [33.3564201174124]
We investigate the utility of modern large language models in assisting professional writers via an empirical user study.
We find that while writers seek LLM's help across all three types of cognitive activities, they find LLMs more helpful in translation and reviewing.
arXiv Detail & Related papers (2023-09-22T01:49:36Z)
- Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps [59.648646222905235]
We propose a method called Chat2Brain that combines LLMs with a basic text-2-image model, known as Text2Brain, to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex tasks of text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z)
- LLM as A Robotic Brain: Unifying Egocentric Memory and Control [77.0899374628474]
Embodied AI focuses on the study and development of intelligent systems that possess a physical or virtual embodiment (i.e., robots).
Memory and control are the two essential parts of an embodied system and usually require separate frameworks to model each of them.
We propose a novel framework called LLM-Brain: using Large-scale Language Model as a robotic brain to unify egocentric memory and control.
arXiv Detail & Related papers (2023-04-19T00:08:48Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, where agents are self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.