CATCH: A Novel Data Synthesis Framework for High Therapy Fidelity and Memory-Driven Planning Chain of Thought in AI Counseling
- URL: http://arxiv.org/abs/2509.25733v1
- Date: Tue, 30 Sep 2025 03:44:00 GMT
- Title: CATCH: A Novel Data Synthesis Framework for High Therapy Fidelity and Memory-Driven Planning Chain of Thought in AI Counseling
- Authors: Mingyu Chen, Jingkai Lin, Zhaojie Chu, Xiaofen Xing, Yirong Chen, Xiangmin Xu
- Abstract summary: CATCH is a novel data synthesis framework designed to address these challenges. To improve therapy fidelity, we introduce the Progressive Dialogue Synthesis strategy. To capture the decision-making rationale behind each response, we propose the Memory-Driven Dynamic Planning thinking pattern.
- Score: 33.31877691538151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, advancements in AI counseling based on large language models have shown significant progress. However, existing studies employ a one-time generation approach to synthesize multi-turn dialogue samples, resulting in low therapy fidelity and failing to capture the decision-making rationale behind each response. In this work, we propose CATCH, a novel data synthesis framework designed to address these challenges. Specifically, to improve therapy fidelity, we introduce the Progressive Dialogue Synthesis strategy, which extracts goals, resources, and solutions from a client's self-report, organizes them into structured outlines, and then incrementally generates stage-aligned counseling dialogues. To capture decision-making rationale behind each response, we propose the Memory-Driven Dynamic Planning thinking pattern that integrates memory enhancement, global planning, and strategy reasoning; a collaborative multi-agent optimizer then leverages MDP to attach explicit chain-of-thought to each dialogue turn. Extensive experiments and human evaluations demonstrate that CATCH significantly enhances fidelity and logical coherence in AI counseling.
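The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration only: all class names, fields, and function signatures are hypothetical stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Outline:
    """Structured outline extracted from a client's self-report (hypothetical schema)."""
    goals: list
    resources: list
    solutions: list

@dataclass
class Turn:
    speaker: str
    text: str
    chain_of_thought: str = ""

def progressive_synthesis(outline, stages, generate):
    """Progressive Dialogue Synthesis: generate the dialogue incrementally,
    one counseling stage at a time, instead of in a single one-shot pass."""
    dialogue = []
    for stage in stages:
        # Each stage sees the outline plus all turns generated so far,
        # so later stages stay aligned with earlier ones.
        dialogue.extend(generate(stage, outline, dialogue))
    return dialogue

def attach_mdp_cot(dialogue, memory, plan, strategize):
    """Memory-Driven Dynamic Planning: attach an explicit chain of thought
    (memory enhancement, global planning, strategy reasoning) to each counselor turn."""
    for i, turn in enumerate(dialogue):
        if turn.speaker != "counselor":
            continue
        recalled = memory(dialogue[:i])            # memory enhancement
        goal = plan(recalled)                      # global planning
        strategy = strategize(goal, dialogue[:i])  # strategy reasoning
        turn.chain_of_thought = f"recall: {recalled} | plan: {goal} | strategy: {strategy}"
    return dialogue
```

In the paper, the chain-of-thought attachment is performed by a collaborative multi-agent optimizer; here the three components are passed in as plain callables to keep the sketch self-contained.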
Related papers
- PRINCIPLES: Synthetic Strategy Memory for Proactive Dialogue Agents [16.819463022406627]
We propose PRINCIPLES: a synthetic strategy memory for proactive dialogue agents. PRINCIPLES is derived through offline self-play simulations and serves as reusable knowledge that guides strategy planning. We evaluate PRINCIPLES in both emotional support and persuasion domains, demonstrating consistent improvements over strong baselines.
arXiv Detail & Related papers (2025-09-22T07:53:59Z)
- Crisp: Cognitive Restructuring of Negative Thoughts through Multi-turn Supportive Dialogues [75.16593367473259]
Cognitive Restructuring (CR) is a psychotherapeutic process aimed at identifying and restructuring an individual's negative thoughts. Existing efforts implement CR via simple text rewriting, fixed-pattern dialogues, or a one-shot CR workflow. We propose CRDial, a novel framework for CR, which creates multi-turn dialogues with specifically designed identification and restructuring stages.
arXiv Detail & Related papers (2025-04-24T04:22:00Z)
- Meta-Reasoner: Dynamic Guidance for Optimized Inference-time Reasoning in Large Language Models [35.82665698868508]
Large Language Models (LLMs) struggle with high computational cost and error propagation at inference time. We propose Meta-Reasoner, a new framework that enables LLMs to optimize inference compute by adjusting their reasoning strategies during inference. Our method improves performance by 9-12% over previous SOTA methods while reducing inference time by 28-35%.
arXiv Detail & Related papers (2025-02-27T09:40:13Z)
- Mutual Reinforcement of LLM Dialogue Synthesis and Summarization Capabilities for Few-Shot Dialogue Summarization [28.989849099599667]
We propose Mutual Reinforcing Data Synthesis (MRDS) within LLMs to improve the few-shot dialogue summarization task. By leveraging the proposed MRDS mechanism, we elicit the internal knowledge of the LLM in the form of synthetic data. Our method attains the highest average scores in human evaluations.
arXiv Detail & Related papers (2025-02-24T17:01:48Z)
- Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an "AI Therapist" [0.0]
Large Language Model (LLM)-powered conversational agents have the potential to provide users with scaled behavioral healthcare support. We introduce a novel paradigm for dialog policy planning in conversational agents, enabling them to act according to an expert-written "script". We implement two variants of Script-Based Dialog Policy Planning using different prompting techniques and synthesize a total of 100 conversations with LLM-simulated patients.
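A script-following turn might look like the sketch below: classify the user's utterance, follow the scripted transition, and generate a response constrained to the current step's instruction. The script format, function names, and labels here are hypothetical; the paper's actual script representation is not specified in this summary.

```python
def script_step(script, state, user_utterance, classify, respond):
    """Advance one turn of an expert-written script (hypothetical interface).

    `script` maps each state to an instruction for the responder and a table
    of transitions keyed by the classifier's label for the user turn.
    """
    label = classify(state, user_utterance)
    # Fall back to staying in the current state if the label has no transition.
    next_state = script[state]["transitions"].get(label, state)
    reply = respond(script[next_state]["instruction"], user_utterance)
    return next_state, reply
```

The design keeps the LLM's freedom local to one step: it classifies and phrases responses, while the script owns the global dialog policy.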
arXiv Detail & Related papers (2024-12-13T12:12:47Z)
- Planning Like Human: A Dual-process Framework for Dialogue Planning [31.995557540062553]
We propose the Dual-Process Dialogue Planning (DPDP) framework to enhance dialogue planning in Large Language Models (LLMs).
Inspired by the dual-process theory in psychology, the framework embodies two modes of thinking: intuitive (fast) and analytical (slow).
Our empirical evaluations affirm DPDP's superiority in achieving both high-quality dialogues and operational efficiency, outpacing existing methods.
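The two-mode control loop described above reduces to a simple dispatch; this is a hedged sketch with hypothetical names, not the paper's actual routing criterion.

```python
def dual_process_plan(dialogue_history, intuitive_policy, analytical_planner, needs_deliberation):
    """Route planning between a fast intuitive policy and a slow analytical planner.

    Easy turns take the cheap System-1 path; turns judged difficult trigger
    explicit System-2 deliberation. The routing predicate is the key design
    choice, since it trades dialogue quality against inference cost.
    """
    if needs_deliberation(dialogue_history):
        return analytical_planner(dialogue_history)  # slow, deliberate reasoning
    return intuitive_policy(dialogue_history)        # fast, reactive response
```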
arXiv Detail & Related papers (2024-06-08T06:52:47Z)
- Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents [121.46051697742608]
We introduce a new dialogue policy planning paradigm to strategize dialogue problems with a tunable language model plug-in named PPDPP.
Specifically, we develop a novel training framework to facilitate supervised fine-tuning over available human-annotated data.
PPDPP consistently and substantially outperforms existing approaches on three different proactive dialogue applications.
arXiv Detail & Related papers (2023-11-01T03:20:16Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- EM Pre-training for Multi-party Dialogue Response Generation [86.25289241604199]
In multi-party dialogues, the addressee of a response utterance should be specified before it is generated.
We propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels.
arXiv Detail & Related papers (2023-05-21T09:22:41Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
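Joint multi-task training of this kind typically optimizes a weighted sum of the main loss and the auxiliary losses; the sketch below illustrates that combination with hypothetical task names and a uniform weighting scheme, not the paper's exact loss formulation.

```python
def multitask_loss(losses, weights):
    """Combine the main response-selection loss with auxiliary self-supervised losses.

    `losses` maps task name -> scalar loss for the current batch; `weights`
    gives each auxiliary task's coefficient. The main task always contributes
    with weight 1, so the auxiliary tasks act as regularizing signals.
    """
    total = losses["response_selection"]
    for task, weight in weights.items():
        total += weight * losses[task]
    return total
```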
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.