Understanding the Impact of Long-Term Memory on Self-Disclosure with
Large Language Model-Driven Chatbots for Public Health Intervention
- URL: http://arxiv.org/abs/2402.11353v1
- Date: Sat, 17 Feb 2024 18:05:53 GMT
- Title: Understanding the Impact of Long-Term Memory on Self-Disclosure with
Large Language Model-Driven Chatbots for Public Health Intervention
- Authors: Eunkyung Jo, Yuin Jeong, SoHyun Park, Daniel A. Epstein, Young-Ho Kim
- Abstract summary: Large language models (LLMs) offer the potential to support public health monitoring by facilitating health disclosure through open-ended conversations.
Augmenting LLMs with long-term memory (LTM) presents an opportunity to improve engagement and self-disclosure.
- Score: 15.430380965922325
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent large language models (LLMs) offer the potential to support public
health monitoring by facilitating health disclosure through open-ended
conversations but rarely preserve the knowledge gained about individuals across
repeated interactions. Augmenting LLMs with long-term memory (LTM) presents an
opportunity to improve engagement and self-disclosure, but we lack an
understanding of how LTM impacts people's interaction with LLM-driven chatbots
in public health interventions. We examine the case of CareCall -- an
LLM-driven voice chatbot with LTM -- through the analysis of 1,252 call logs
and interviews with nine users. We found that LTM enhanced health disclosure
and fostered positive perceptions of the chatbot by offering familiarity.
However, we also observed challenges in promoting self-disclosure through LTM,
particularly around addressing chronic health conditions and privacy concerns.
We discuss considerations for LTM integration in LLM-driven chatbots for public
health monitoring, including carefully deciding what topics need to be
remembered in light of public health goals.
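The core mechanism the abstract discusses, carrying knowledge about an individual across repeated calls and surfacing it in later conversations, can be illustrated with a minimal sketch. This is not CareCall's implementation: the JSON memory store, the `chat_complete` callable, and the prompt wording are hypothetical placeholders for whatever LLM API and storage a real deployment would use.

```python
# Minimal sketch of a long-term-memory (LTM) augmented check-in chatbot.
# Hypothetical: `chat_complete` stands in for any LLM chat API; the JSON files
# stand in for a real per-user memory store. Not CareCall's implementation.
import json
from pathlib import Path

MEMORY_DIR = Path("ltm_memories")
MEMORY_DIR.mkdir(exist_ok=True)

def load_memory(user_id: str) -> list[str]:
    """Return facts saved from earlier calls (empty on the first call)."""
    path = MEMORY_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

def save_memory(user_id: str, facts: list[str]) -> None:
    (MEMORY_DIR / f"{user_id}.json").write_text(json.dumps(facts, indent=2))

def build_system_prompt(facts: list[str]) -> str:
    """Inject remembered facts so the chatbot can follow up on earlier calls."""
    prompt = "You are a weekly health check-in assistant. Be warm and brief."
    if facts:
        prompt += "\nKnown from earlier calls:\n" + "\n".join(f"- {f}" for f in facts)
        prompt += "\nFollow up naturally on these points when relevant."
    return prompt

def check_in(user_id: str, user_message: str, chat_complete) -> str:
    facts = load_memory(user_id)
    reply = chat_complete(system=build_system_prompt(facts), user=user_message)
    # A real system would extract salient facts with another LLM call or rules;
    # appending the raw utterance here is only a stand-in.
    save_memory(user_id, facts + [user_message])
    return reply
```

The design decision the paper probes sits in what gets written into the memory store: whether chronic conditions or sensitive disclosures are remembered is exactly where the familiarity benefits and the privacy concerns noted above trade off.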
Related papers
- FedMentalCare: Towards Privacy-Preserving Fine-Tuned LLMs to Analyze Mental Health Status Using Federated Learning Framework [0.0]
FedMentalCare is a privacy-preserving framework for deploying Large Language Models (LLMs) in mental healthcare applications.
Our framework demonstrates a scalable, privacy-aware approach for deploying LLMs in real-world mental healthcare scenarios.
arXiv Detail & Related papers (2025-02-27T07:04:19Z) - Private Yet Social: How LLM Chatbots Support and Challenge Eating Disorder Recovery [5.633853272693508]
Eating disorders (ED) are complex mental health conditions that require long-term management and support.
Recent advancements in large language model (LLM)-based chatbots offer the potential to assist individuals in receiving immediate support.
arXiv Detail & Related papers (2024-12-16T10:59:49Z) - Advancing Conversational Psychotherapy: Integrating Privacy, Dual-Memory, and Domain Expertise with Large Language Models [0.8563446809549775]
Mental health has become a global issue that reveals the limitations of traditional conversational psychotherapy.
We introduce SoulSpeak, a Large Language Model (LLM)-enabled chatbot designed to democratize access to psychotherapy.
arXiv Detail & Related papers (2024-12-04T03:02:46Z) - NewsInterview: a Dataset and a Playground to Evaluate LLMs' Ground Gap via Informational Interviews [65.35458530702442]
We focus on journalistic interviews, a domain rich in grounding communication and abundant in data.
We curate a dataset of 40,000 two-person informational interviews from NPR and CNN.
LLMs are significantly less likely than human interviewers to use acknowledgements and to pivot to higher-level questions.
arXiv Detail & Related papers (2024-11-21T01:37:38Z) - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health [1.8772687384996551]
Large language models (LLMs) have opened up new opportunities for transforming patient engagement in healthcare through conversational AI.
We showcase the power of LLMs in handling unstructured conversational data through four case studies.
arXiv Detail & Related papers (2024-06-19T16:02:04Z) - Can Public LLMs be used for Self-Diagnosis of Medical Conditions ? [0.0]
Large Language Models (LLMs) have emerged as a transformative paradigm for conversational tasks.
The widespread integration of Gemini with Google search and GPT-4.0 with Bing search has led to a shift in the trend of self-diagnosis.
We compare the performance of both the state-of-the-art GPT-4.0 and the free Gemini model on the task of self-diagnosis.
arXiv Detail & Related papers (2024-05-18T22:43:44Z) - Retrieval Augmented Thought Process for Private Data Handling in Healthcare [53.89406286212502]
We introduce the Retrieval-Augmented Thought Process (RATP).
RATP formulates the thought generation of Large Language Models (LLMs) as a multi-step decision process over externally retrieved knowledge.
On a private dataset of electronic medical records, RATP achieves 35% additional accuracy compared to in-context retrieval-augmented generation for the question-answering task.
arXiv Detail & Related papers (2024-02-12T17:17:50Z) - Benefits and Harms of Large Language Models in Digital Mental Health [40.02859683420844]
Large language models (LLMs) show promise in leading digital mental health to uncharted territory.
This article presents contemporary perspectives on the opportunities and risks posed by LLMs in the design, development, and implementation of digital mental health tools.
arXiv Detail & Related papers (2023-11-07T14:11:10Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, such as GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - BianQue: Balancing the Questioning and Suggestion Ability of Health LLMs
with Multi-turn Health Conversations Polished by ChatGPT [19.502907861059604]
Large language models (LLMs) have performed well in providing general and extensive health suggestions in single-turn conversations.
We propose BianQue, a ChatGLM-based LLM finetuned with the self-constructed health conversation dataset BianQueCorpus.
arXiv Detail & Related papers (2023-10-24T14:57:34Z) - Talk2Care: Facilitating Asynchronous Patient-Provider Communication with
Large-Language-Model [29.982507402325396]
We built an LLM-powered communication system, Talk2Care, for older adults and healthcare providers.
For older adults, we leveraged the convenience and accessibility of voice assistants (VAs) and built an LLM-powered VA interface for effective information collection.
The results showed that Talk2Care could facilitate the communication process, enrich the health information collected from older adults, and considerably save providers' efforts and time.
arXiv Detail & Related papers (2023-09-17T19:46:03Z) - Siren's Song in the AI Ocean: A Survey on Hallucination in Large
Language Models [116.01843550398183]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks.
LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge.
arXiv Detail & Related papers (2023-09-03T16:56:48Z) - Privacy-preserving machine learning for healthcare: open challenges and
future perspectives [72.43506759789861]
We conduct a review of recent literature concerning Privacy-Preserving Machine Learning (PPML) for healthcare.
We primarily focus on privacy-preserving training and inference-as-a-service.
The aim of this review is to guide the development of private and efficient ML models in healthcare.
arXiv Detail & Related papers (2023-03-27T19:20:51Z) - Check Your Facts and Try Again: Improving Large Language Models with
External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules for external knowledge and automated feedback (a rough sketch of this loop follows the list).
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
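The "check your facts and try again" pattern named in the last entry can be sketched as a simple generate-verify-revise loop. The function names (`generate`, `retrieve_evidence`, `is_grounded`) are hypothetical placeholders, not the paper's LLM-Augmenter API.

```python
# Hypothetical sketch of a generate-verify-revise loop around a black-box LLM,
# in the spirit of the "external knowledge and automated feedback" entry above.
from typing import Callable

def answer_with_feedback(
    question: str,
    generate: Callable[[str], str],           # black-box LLM call
    retrieve_evidence: Callable[[str], str],  # external knowledge lookup
    is_grounded: Callable[[str, str], bool],  # automated fact check
    max_rounds: int = 3,
) -> str:
    evidence = retrieve_evidence(question)
    prompt = f"Evidence:\n{evidence}\n\nQuestion: {question}"
    answer = generate(prompt)
    for _ in range(max_rounds):
        if is_grounded(answer, evidence):
            return answer
        # Feed the mismatch back and ask the model to try again.
        prompt = (
            f"Evidence:\n{evidence}\n\nQuestion: {question}\n"
            f"Your previous answer was not supported by the evidence:\n{answer}\n"
            "Revise the answer using only the evidence."
        )
        answer = generate(prompt)
    return answer
```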