Crisp: Cognitive Restructuring of Negative Thoughts through Multi-turn Supportive Dialogues
- URL: http://arxiv.org/abs/2504.17238v1
- Date: Thu, 24 Apr 2025 04:22:00 GMT
- Title: Crisp: Cognitive Restructuring of Negative Thoughts through Multi-turn Supportive Dialogues
- Authors: Jinfeng Zhou, Yuxuan Chen, Jianing Yin, Yongkang Huang, Yihan Shi, Xikun Zhang, Libiao Peng, Rongsheng Zhang, Tangjie Lv, Zhipeng Hu, Hongning Wang, Minlie Huang
- Abstract summary: Cognitive Restructuring (CR) is a psychotherapeutic process aimed at identifying and restructuring an individual's negative thoughts. Existing efforts implement CR via simple text rewriting, fixed-pattern dialogues, or a one-shot CR workflow. We propose CRDial, a novel framework for CR, which creates multi-turn dialogues with specifically designed identification and restructuring stages.
- Score: 75.16593367473259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cognitive Restructuring (CR) is a psychotherapeutic process aimed at identifying and restructuring an individual's negative thoughts, arising from mental health challenges, into more helpful and positive ones via multi-turn dialogues. Clinician shortage and stigma urge the development of human-LLM interactive psychotherapy for CR. Yet, existing efforts implement CR via simple text rewriting, fixed-pattern dialogues, or a one-shot CR workflow, failing to align with the psychotherapeutic process for effective CR. To address this gap, we propose CRDial, a novel framework for CR, which creates multi-turn dialogues with specifically designed identification and restructuring stages of negative thoughts, integrates sentence-level supportive conversation strategies, and adopts a multi-channel loop mechanism to enable iterative CR. With CRDial, we distill Crisp, a large-scale and high-quality bilingual dialogue dataset, from LLM. We then train Crispers, Crisp-based conversational LLMs for CR, at 7B and 14B scales. Extensive human studies show the superiority of Crispers in pointwise, pairwise, and intervention evaluations.
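The abstract describes CRDial only at a high level. As a rough, hypothetical sketch (not the authors' code, prompts, or loop mechanism), the Python snippet below imitates the described structure: an identification stage, then a restructuring stage that iterates until a stopping check passes. The `chat` callable, the system prompt, the `echo_chat` stub, and the stopping criterion are all assumptions introduced for illustration.

```python
# Hypothetical sketch of a CRDial-style multi-turn loop; not the paper's code.
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": ..., "content": ...}

def cr_session(user_turns: List[str],
               chat: Callable[[List[Message]], str],
               max_loops: int = 3) -> List[Message]:
    """Run an identification stage, then iterate restructuring turns."""
    history: List[Message] = [{
        "role": "system",
        "content": "You are a supportive counselor doing cognitive restructuring.",
    }]
    turns = iter(user_turns)

    # Stage 1: identification -- elicit and name the negative thought.
    for _ in range(2):
        history.append({"role": "user", "content": next(turns, "")})
        history.append({"role": "assistant", "content": chat(history)})

    # Stage 2: restructuring -- loop until a placeholder check decides the
    # thought has been reframed, imitating the iterative design in the abstract.
    for _ in range(max_loops):
        history.append({"role": "user", "content": next(turns, "")})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        if "reframed" in reply.lower():  # invented stopping criterion
            break
    return history

# Minimal stub so the sketch runs without a real model.
def echo_chat(history: List[Message]) -> str:
    return "Let's look at the evidence for that thought together. (reframed)"

if __name__ == "__main__":
    for m in cr_session(["I always fail at everything.",
                         "Maybe.",
                         "I suppose not always."], echo_chat):
        print(f'{m["role"]}: {m["content"]}')
```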
Related papers
- Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks [68.49251303172674]
State-of-the-art large language models (LLMs) exhibit impressive problem-solving capabilities but may struggle with complex reasoning and factual correctness.
Existing methods harness the strengths of chain-of-thought and retrieval-augmented generation (RAG) to decompose a complex problem into simpler steps and apply retrieval to improve factual correctness.
We introduce Critic-guided planning with Retrieval-augmentation, CR-Planner, a novel framework that leverages fine-tuned critic models to guide both reasoning and retrieval processes through planning.
arXiv Detail & Related papers (2024-10-02T11:26:02Z) - Evaluating Task-Oriented Dialogue Consistency through Constraint Satisfaction [1.4272411349249625]
We propose to conceptualize dialogue consistency as a Constraint Satisfaction Problem (CSP).
We utilize a CSP solver to detect inconsistencies in dialogues re-lexicalized by an LLM.
We argue that CSP captures core properties of dialogue consistency that have been poorly considered by approaches based on component pipelines.
arXiv Detail & Related papers (2024-07-16T15:38:41Z) - Cohesive Conversations: Enhancing Authenticity in Multi-Agent Simulated Dialogues [17.38671584773247]
This paper investigates the quality of multi-agent dialogues in simulations powered by Large Language Models (LLMs).
We propose a novel Screening, Diagnosis, and Regeneration (SDR) framework that detects and corrects utterance errors.
arXiv Detail & Related papers (2024-07-13T14:24:45Z) - Data Augmentation of Multi-turn Psychological Dialogue via Knowledge-driven Progressive Thought Prompting [46.919537239016734]
Large language models (LLMs) have simplified the implementation of multi-turn dialogues.
It remains challenging to deliver satisfactory performance in low-resource domains, such as psychological dialogue.
We propose a knowledge-driven progressive thought prompting method to guide LLMs to generate psychology-related dialogue.
arXiv Detail & Related papers (2024-06-24T12:02:56Z) - CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling [27.193022503592342]
We propose CPsyCoun, a report-based multi-turn dialogue reconstruction and evaluation framework for Chinese psychological counseling.
To fully exploit psychological counseling reports, a two-phase approach is devised to construct high-quality dialogues.
A comprehensive evaluation benchmark is developed for the effective automatic evaluation of multi-turn psychological consultations.
arXiv Detail & Related papers (2024-05-26T05:18:00Z) - Socratic Reasoning Improves Positive Text Rewriting [49.32560132826547]
Reframing a negative into a positive thought is at the crux of several cognitive approaches to mental health and psychotherapy. We develop a novel framework called SocraticReframe to rationalize the thought rewriting process.
arXiv Detail & Related papers (2024-03-05T15:05:06Z) - JoTR: A Joint Transformer and Reinforcement Learning Framework for Dialog Policy Learning [53.83063435640911]
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
We introduce a novel framework, JoTR, to generate flexible dialogue actions.
Unlike traditional methods, JoTR formulates a word-level policy that allows for a more dynamic and adaptable dialogue action generation.
arXiv Detail & Related papers (2023-09-01T03:19:53Z) - CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation [62.13413129518165]
CR-Walker is a model that performs tree-structured reasoning on a knowledge graph.
It generates informative dialog acts to guide language generation.
Automatic and human evaluations show that CR-Walker can arrive at more accurate recommendations.
arXiv Detail & Related papers (2020-10-20T14:53:22Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals (see the sketch after this list).
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
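The frequency-based term-importance step in the last entry is described in only one sentence. As a small illustrative sketch under invented assumptions (the tokenizer, stopword list, and `top_k` cutoff are not from the cited paper), the snippet below scores context terms by frequency and appends the highest-scoring missing terms to the current conversational query.

```python
# Illustrative frequency-based query expansion for conversational retrieval;
# an assumption-laden sketch, not the cited paper's implementation.
import re
from collections import Counter
from typing import List

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are", "in", "it",
             "for", "on", "i", "do", "does", "what", "how"}

def tokenize(text: str) -> List[str]:
    return [t for t in re.findall(r"[a-z0-9]+", text.lower())
            if t not in STOPWORDS]

def expand_query(context_turns: List[str], current_query: str,
                 top_k: int = 2) -> str:
    """Append the top_k most frequent context terms missing from the query."""
    counts = Counter(t for turn in context_turns for t in tokenize(turn))
    query_terms = set(tokenize(current_query))
    expansions = [term for term, _ in counts.most_common()
                  if term not in query_terms][:top_k]
    if not expansions:
        return current_query
    return current_query + " " + " ".join(expansions)

# Example: a context-dependent follow-up gains topical terms from earlier turns.
print(expand_query(
    ["I want to start training for a marathon",
     "What shoes are good for marathon training?"],
    "How long does it take?",
))  # -> "How long does it take? training marathon"
```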
This list is automatically generated from the titles and abstracts of the papers listed on this site.