Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction
- URL: http://arxiv.org/abs/2305.02466v1
- Date: Thu, 4 May 2023 00:12:52 GMT
- Title: Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction
- Authors: Ashish Sharma, Kevin Rushton, Inna Wanyin Lin, David Wadden, Khendra
G. Lucas, Adam S. Miner, Theresa Nguyen, Tim Althoff
- Abstract summary: We conduct a human-centered study of how language models may assist people in reframing negative thoughts.
Based on literature, we define a framework of seven linguistic attributes that can be used to reframe a thought.
We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model.
- Score: 7.683627834905736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A proven therapeutic technique to overcome negative thoughts is to replace
them with a more hopeful "reframed thought." Although therapy can help people
practice and learn this Cognitive Reframing of Negative Thoughts, clinician
shortages and mental health stigma commonly limit people's access to therapy.
In this paper, we conduct a human-centered study of how language models may
assist people in reframing negative thoughts. Based on psychology literature,
we define a framework of seven linguistic attributes that can be used to
reframe a thought. We develop automated metrics to measure these attributes and
validate them with expert judgements from mental health practitioners. We
collect a dataset of 600 situations, thoughts and reframes from practitioners
and use it to train a retrieval-enhanced in-context learning model that
effectively generates reframed thoughts and controls their linguistic
attributes. To investigate what constitutes a "high-quality" reframe, we
conduct an IRB-approved randomized field study on a large mental health website
with over 2,000 participants. Amongst other findings, we show that people
prefer highly empathic or specific reframes, as opposed to reframes that are
overly positive. Our findings provide key implications for the use of LMs to
assist people in overcoming negative thoughts.
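The abstract's "retrieval-enhanced in-context learning model" can be sketched as: retrieve the practitioner-written (situation, thought, reframe) examples most similar to the user's input and prepend them as few-shot demonstrations. The toy dataset, similarity measure, and prompt wording below are illustrative assumptions, not the authors' implementation.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word overlap; stands in for the paper's learned retriever."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(dataset, situation, thought, k=2):
    """Select the k most similar examples and format a few-shot prompt."""
    query = f"{situation} {thought}"
    ranked = sorted(
        dataset,
        key=lambda ex: similarity(query, f"{ex['situation']} {ex['thought']}"),
        reverse=True,
    )
    lines = ["Rewrite the negative thought as a balanced, hopeful reframe."]
    for ex in ranked[:k]:
        lines.append(
            f"Situation: {ex['situation']}\nThought: {ex['thought']}\nReframe: {ex['reframe']}"
        )
    lines.append(f"Situation: {situation}\nThought: {thought}\nReframe:")
    return "\n\n".join(lines)

# Toy stand-in for the 600-example practitioner dataset.
dataset = [
    {"situation": "I failed my driving test", "thought": "I will never pass",
     "reframe": "One failed test does not decide my future; I can practice and retake it."},
    {"situation": "My friend cancelled dinner", "thought": "Nobody likes me",
     "reframe": "One cancellation has many possible causes; it says little about my friendships."},
]

prompt = build_prompt(dataset, "I failed a math exam", "I will never graduate")
```

The completed prompt would then be sent to a language model; controlling the seven linguistic attributes would additionally require attribute-conditioned retrieval or instructions, which this sketch omits.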
Related papers
- MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders [59.515827458631975]
Mental health disorders are one of the most serious diseases in the world.
Privacy concerns limit the accessibility of personalized treatment data.
MentalArena is a self-play framework to train language models.
arXiv Detail & Related papers (2024-10-09T13:06:40Z)
- Therapy as an NLP Task: Psychologists' Comparison of LLMs and Human Peers in CBT [6.812247730094931]
We investigate the potential and limitations of using large language models (LLMs) as providers of evidence-based therapy.
We replicated publicly accessible mental health conversations rooted in Cognitive Behavioral Therapy (CBT) to compare session dynamics and counselor's CBT-based behaviors.
Our findings show that the peer sessions are characterized by empathy, small talk, therapeutic alliance, and shared experiences but often exhibit therapist drift.
arXiv Detail & Related papers (2024-09-03T19:19:13Z)
- Are Large Language Models Possible to Conduct Cognitive Behavioral Therapy? [13.0263170692984]
Large language models (LLMs) have been validated on a wide range of language tasks, opening new possibilities for psychological assistance and therapy.
Many concerns have been raised by mental health experts regarding the use of LLMs for therapy.
Four LLM variants with excellent performance on natural language processing are evaluated.
arXiv Detail & Related papers (2024-07-25T03:01:47Z)
- Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided [38.11184388388781]
Large language models (LLMs) have offered new opportunities for emotional support.
This work takes a first step by engaging with cognitive reappraisals.
We conduct a first-of-its-kind expert evaluation of an LLM's zero-shot ability to generate cognitive reappraisal responses.
arXiv Detail & Related papers (2024-04-01T17:56:30Z)
- Socratic Reasoning Improves Positive Text Rewriting [60.56097569286398]
SocraticReframe uses a sequence of question-answer pairs to rationalize the thought-rewriting process.
We show that Socratic rationales significantly improve positive text rewriting according to both automatic and human evaluations guided by criteria from psychotherapy research.
arXiv Detail & Related papers (2024-03-05T15:05:06Z)
- HealMe: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy [25.908522131646258]
We unveil the Helping and Empowering through Adaptive Language in Mental Enhancement (HealMe) model.
This novel cognitive reframing therapy method effectively addresses deep-rooted negative thoughts and fosters rational, balanced perspectives.
We adopt the first comprehensive and expertly crafted psychological evaluation metrics, specifically designed to rigorously assess the performance of cognitive reframing.
arXiv Detail & Related papers (2024-02-26T09:10:34Z)
- PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- Facilitating Self-Guided Mental Health Interventions Through Human-Language Model Interaction: A Case Study of Cognitive Restructuring [8.806947407907137]
We study how human-language model interaction can support self-guided mental health interventions.
We design and evaluate a system that uses language models to support people through various steps of cognitive restructuring.
arXiv Detail & Related papers (2023-10-24T02:23:34Z)
- Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting [82.64015366154884]
We study the task of cognitive distortion detection and propose the Diagnosis of Thought (DoT) prompting.
DoT performs diagnosis on the patient's speech via three stages: subjectivity assessment to separate the facts and the thoughts; contrastive reasoning to elicit the reasoning processes supporting and contradicting the thoughts; and schema analysis to summarize the cognition schemas.
Experiments demonstrate that DoT obtains significant improvements over ChatGPT for cognitive distortion detection, while generating high-quality rationales approved by human experts.
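The three DoT stages described above can be sketched as a sequential prompting pipeline, where each stage's output is fed forward into the next stage's context. The stage instructions and the `llm` callable below are assumptions for illustration; the paper's actual prompts differ.

```python
# Illustrative three-stage Diagnosis-of-Thought (DoT) prompting pipeline.
DOT_STAGES = [
    ("subjectivity_assessment",
     "Separate the objective facts from the subjective thoughts in the speech."),
    ("contrastive_reasoning",
     "List reasoning that supports the thought, then reasoning that contradicts it."),
    ("schema_analysis",
     "Summarize the underlying cognition schema suggested by the reasoning above."),
]

def diagnose_thought(speech, llm):
    """Run the three DoT stages in order, feeding each stage's output forward."""
    context = f"Patient speech: {speech}"
    rationales = {}
    for name, instruction in DOT_STAGES:
        prompt = f"{context}\n\nTask: {instruction}"
        rationales[name] = llm(prompt)          # any text-completion callable
        context += f"\n\n{name}: {rationales[name]}"  # accumulate for next stage
    return rationales

# Stub "LLM" that echoes the final prompt line, just to show the data flow.
echo_llm = lambda prompt: prompt.splitlines()[-1]
out = diagnose_thought("Everyone ignored my idea, so I must be worthless.", echo_llm)
```

In practice `llm` would wrap a real model call, and a final classification step would map the schema analysis to a cognitive distortion label.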
arXiv Detail & Related papers (2023-10-11T02:47:21Z)
- Inducing anxiety in large language models can induce bias [47.85323153767388]
We focus on twelve established large language models (LLMs) and subject them to a questionnaire commonly used in psychiatry.
Our results show that six of the latest LLMs respond robustly to the anxiety questionnaire, producing comparable anxiety scores to humans.
Anxiety-induction not only influences LLMs' scores on an anxiety questionnaire but also influences their behavior in a previously-established benchmark measuring biases such as racism and ageism.
arXiv Detail & Related papers (2023-04-21T16:29:43Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.