Using Large Language Models to Create Personalized Networks From Therapy Sessions
- URL: http://arxiv.org/abs/2512.05836v1
- Date: Fri, 05 Dec 2025 16:12:12 GMT
- Title: Using Large Language Models to Create Personalized Networks From Therapy Sessions
- Authors: Clarissa W. Ong, Hiba Arnaout, Kate Sheehan, Estella Fox, Eugen Owtscharow, Iryna Gurevych
- Abstract summary: We present an end-to-end pipeline for automatically generating client networks from 77 therapy transcripts. We applied in-context learning to jointly identify psychological processes and their dimensions. Experts found that networks produced by our multi-step approach outperformed those built with direct prompting.
- Score: 37.49333022472426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in psychotherapy have focused on treatment personalization, such as selecting treatment modules based on personalized networks. However, estimating personalized networks typically requires intensive longitudinal data, which is not always feasible to collect. One way to make network-driven treatment personalization scalable is to leverage LLMs. In this study, we present an end-to-end pipeline for automatically generating client networks from 77 therapy transcripts to support case conceptualization and treatment planning. We annotated 3364 psychological processes and their corresponding dimensions in the transcripts. Using these data, we applied in-context learning to jointly identify psychological processes and their dimensions; the method achieved high performance even with only a few training examples. To organize the processes into networks, we introduced a two-step method that grouped them into clinically meaningful clusters and then generated explanation-augmented relationships between clusters. Experts found that networks produced by our multi-step approach outperformed those built with direct prompting in clinical utility and interpretability, with up to 90% preferring our approach. In addition, experts rated the networks favorably, with scores for clinical relevance, novelty, and usefulness ranging from 72% to 75%. Our findings provide a proof of concept for using LLMs to create clinically relevant networks from therapy transcripts. Advantages of our approach include bottom-up case conceptualization from client utterances in therapy sessions and identification of latent themes. Networks generated by our pipeline may be used in clinical settings as well as in supervision and training. Future research should examine whether these networks improve treatment outcomes relative to other methods of treatment personalization, including statistically estimated networks.
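The two-step network construction described in the abstract can be sketched as follows. This is a hypothetical illustration only: the keyword-based labeler stands in for the paper's in-context LLM prompts, and all function names, cluster labels, and example utterances are invented, not taken from the authors' implementation.

```python
from collections import defaultdict

def cluster_label(process: str) -> str:
    """Stand-in for an LLM call that assigns a process to a clinically
    meaningful cluster; the paper's pipeline would prompt an LLM with
    in-context examples instead of this keyword lookup."""
    keywords = {"avoid": "avoidance", "worry": "worry",
                "ruminate": "worry", "value": "values"}
    for kw, label in keywords.items():
        if kw in process.lower():
            return label
    return "other"

def build_network(processes):
    # Step 1: group extracted psychological processes into clusters.
    clusters = defaultdict(list)
    for p in processes:
        clusters[cluster_label(p)].append(p)
    # Step 2: propose relationships between clusters; a real pipeline
    # would ask the LLM to generate the accompanying explanation.
    labels = sorted(clusters)
    edges = [(a, b, f"candidate link between '{a}' and '{b}'")
             for i, a in enumerate(labels) for b in labels[i + 1:]]
    return dict(clusters), edges

clusters, edges = build_network([
    "I avoid social events",
    "I worry about work deadlines",
    "I ruminate at night",
])
```

The sketch keeps the two steps separate so that cluster membership can be inspected (and corrected) before any between-cluster relationships are proposed, mirroring the multi-step structure the experts preferred over direct prompting.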
Related papers
- Prior-informed optimization of treatment recommendation via bandit algorithms trained on large language model-processed historical records [0.6875312133832079]
Current medical practice depends on standardized treatment frameworks and empirical methodologies that neglect individual patient variations. We develop a comprehensive system integrating Large Language Models (LLMs), Conditional Tabular Generative Adversarial Networks (CTGAN), T-learner counterfactual models, and contextual bandit approaches.
arXiv Detail & Related papers (2025-10-21T18:57:00Z) - Learning to Route: Per-Sample Adaptive Routing for Multimodal Multitask Prediction [4.171905792428217]
We introduce a routing-based architecture that dynamically selects modality processing pathways and task-sharing strategies on a per-sample basis. Our model defines multiple modality paths, including raw and fused representations of text and numeric features. We evaluate the model on both synthetic data and real-world psychotherapy notes predicting depression and anxiety outcomes.
arXiv Detail & Related papers (2025-09-06T16:49:45Z) - Beyond Empathy: Integrating Diagnostic and Therapeutic Reasoning with Large Language Models for Mental Health Counseling [50.83055329849865]
PsyLLM is a large language model designed to integrate diagnostic and therapeutic reasoning for mental health counseling. It processes real-world mental health posts from Reddit and generates multi-turn dialogue structures. Our experiments demonstrate that PsyLLM significantly outperforms state-of-the-art baseline models.
arXiv Detail & Related papers (2025-05-21T16:24:49Z) - Prompt-based Personalized Federated Learning for Medical Visual Question Answering [56.002377299811656]
We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns.
We regard medical datasets from different organs as clients and use pFL to train personalized transformer-based VQA models for each client.
arXiv Detail & Related papers (2024-02-15T03:09:54Z) - Ensembling Neural Networks for Improved Prediction and Privacy in Early Diagnosis of Sepsis [13.121103500410156]
Ensembling neural networks is a technique for improving the generalization error of neural networks.
We show that this technique is an ideal fit for machine learning on medical data.
We show that one can build an ensemble of a few selected patient-specific models that outperforms a single model trained on much larger pooled datasets.
arXiv Detail & Related papers (2022-09-01T13:24:14Z) - Adherence Forecasting for Guided Internet-Delivered Cognitive Behavioral Therapy: A Minimally Data-Sensitive Approach [59.535699822923]
Internet-delivered psychological treatments (IDPT) are seen as an effective and scalable pathway to improving the accessibility of mental healthcare.
This work proposes a deep-learning approach to perform automatic adherence forecasting, while relying on minimally sensitive login/logout data.
The proposed Self-Attention Network achieved over 70% average balanced accuracy when only 1/3 of the treatment duration had elapsed.
arXiv Detail & Related papers (2022-01-11T13:55:57Z) - Integrating Neural Networks and Dictionary Learning for Multidimensional Clinical Characterizations from Functional Connectomics Data [3.276067241408604]
We propose a unified framework that combines neural networks with dictionary learning to model complex interactions between resting state functional MRI and behavioral data.
We evaluate our combined model on a multi-score prediction task using 52 patients diagnosed with Autism Spectrum Disorder (ASD).
Our integrated framework outperforms state-of-the-art methods in a ten-fold cross-validated setting to predict three different measures of clinical severity.
arXiv Detail & Related papers (2020-07-03T20:14:45Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.