Leveraging Open Data and Task Augmentation to Automated Behavioral
Coding of Psychotherapy Conversations in Low-Resource Scenarios
- URL: http://arxiv.org/abs/2210.14254v1
- Date: Tue, 25 Oct 2022 18:15:25 GMT
- Title: Leveraging Open Data and Task Augmentation to Automated Behavioral
Coding of Psychotherapy Conversations in Low-Resource Scenarios
- Authors: Zhuohao Chen, Nikolaos Flemotomos, Zac E. Imel, David C. Atkins,
Shrikanth Narayanan
- Abstract summary: In psychotherapy interactions, the quality of a session is assessed by codifying the communicative behaviors of participants during the conversation.
In this paper, we leverage a publicly available conversation-based dataset and transfer knowledge to the low-resource behavioral coding task.
We introduce a task augmentation method to produce a large number of "analogy tasks" - tasks similar to the target one - and demonstrate that the proposed framework predicts target behaviors more accurately than all the other baseline models.
- Score: 35.44178630251169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In psychotherapy interactions, the quality of a session is assessed by
codifying the communicative behaviors of participants during the conversation
through manual observation and annotation. Developing computational approaches
for automated behavioral coding can reduce the burden on human coders and
facilitate the objective evaluation of the intervention. In the real world,
however, implementing such algorithms is associated with data sparsity
challenges since privacy concerns lead to limited available in-domain data. In
this paper, we leverage a publicly available conversation-based dataset and
transfer knowledge to the low-resource behavioral coding task by performing an
intermediate language model training via meta-learning. We introduce a task
augmentation method to produce a large number of "analogy tasks" - tasks
similar to the target one - and demonstrate that the proposed framework
predicts target behaviors more accurately than all the other baseline models.
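The abstract only sketches the pipeline (intermediate meta-learning over augmented "analogy tasks" built from open conversational data). Below is a minimal, hypothetical Reptile-style sketch of such a loop; the toy encoder, task construction, batch sampler, and all hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of Reptile-style intermediate training over "analogy tasks"
# built from an open conversation dataset. Everything below is an illustrative
# assumption, not the paper's configuration.
import copy
import random
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Toy stand-in for a pretrained language model: mean-pooled word embeddings."""
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):                 # (batch, seq_len) -> (batch, dim)
        return self.emb(token_ids).mean(dim=1)

def make_analogy_tasks(utterances, labels, num_tasks=50, labels_per_task=4):
    """Sample label subsets from the open dataset so each synthetic task
    mirrors the structure of the target behavioral-coding task."""
    label_set = sorted(set(labels))
    tasks = []
    for _ in range(num_tasks):
        chosen = random.sample(label_set, labels_per_task)
        remap = {lab: i for i, lab in enumerate(chosen)}
        tasks.append([(u, remap[l]) for u, l in zip(utterances, labels) if l in remap])
    return tasks

def toy_batch(task, batch_size=16, seq_len=32, vocab_size=30522):
    """Placeholder batch sampler: random token ids stand in for real tokenization."""
    batch = random.sample(task, min(batch_size, len(task)))
    x = torch.randint(0, vocab_size, (len(batch), seq_len))
    y = torch.tensor([lab for _, lab in batch])
    return x, y

def reptile_epoch(encoder, tasks, dim=128, labels_per_task=4,
                  inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """One Reptile meta-epoch: adapt a copy of the encoder on each analogy task,
    then nudge the shared encoder toward the adapted weights."""
    for task in tasks:
        fast = copy.deepcopy(encoder)
        head = nn.Linear(dim, labels_per_task)    # task-specific classification head
        opt = torch.optim.SGD(list(fast.parameters()) + list(head.parameters()), lr=inner_lr)
        for _ in range(inner_steps):
            x, y = toy_batch(task)
            loss = nn.functional.cross_entropy(head(fast(x)), y)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():                     # Reptile outer update
            for p, q in zip(encoder.parameters(), fast.parameters()):
                p.add_(meta_lr * (q - p))
```

After this intermediate phase, the shared encoder would be fine-tuned on the small in-domain behavioral-coding set.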
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
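How the modalities are merged is not detailed in this summary; one plausible reading of a "multimodal transcript" is interleaving verbal content with textualized non-verbal cues before passing it to an LLM. A minimal sketch, in which the field names and cue vocabulary are assumptions rather than the paper's schema:

```python
# Hypothetical sketch: serializing verbal + non-verbal streams into a single
# "multimodal transcript" string for an LLM. Field names and cue labels are
# illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str
    text: str                 # ASR transcript of the turn
    gaze: str                 # e.g. "averted", "toward partner"
    facial_expression: str    # e.g. "neutral", "smile", "frown"

def to_multimodal_transcript(turns: List[Turn]) -> str:
    lines = []
    for t in turns:
        # Non-verbal cues are rendered as bracketed annotations inline with speech.
        lines.append(f"{t.speaker}: {t.text} [gaze: {t.gaze}; face: {t.facial_expression}]")
    return "\n".join(lines)

turns = [
    Turn("A", "So how was your week?", "toward partner", "smile"),
    Turn("B", "Uh, it was fine, I guess.", "averted", "neutral"),
]
prompt = ("Rate participant B's engagement from 1 (disengaged) to 5 (highly engaged).\n\n"
          + to_multimodal_transcript(turns))
print(prompt)  # this prompt would then be sent to an LLM for engagement prediction
```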
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Chain-of-Interaction: Enhancing Large Language Models for Psychiatric Behavior Understanding by Dyadic Contexts [4.403408362362806]
We introduce the Chain-of-Interaction prompting method to contextualize large language models for psychiatric decision support by the dyadic interactions.
This approach enables large language models to leverage the coding scheme, patient state, and domain knowledge for patient behavioral coding.
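The exact prompt stages are not given in this summary; the sketch below only illustrates the general idea of assembling the coding scheme, dyadic context, and patient-state reasoning into a staged prompt. All templates and code labels are assumptions, not the Chain-of-Interaction paper's actual prompts.

```python
# Hypothetical sketch of a staged, interaction-aware prompt for behavioral coding.
# Stage wording and coding labels are illustrative only.
CODING_SCHEME = {
    "change_talk": "patient language in favor of change",
    "sustain_talk": "patient language in favor of the status quo",
}

def build_prompt(therapist_turn: str, patient_turn: str) -> str:
    scheme = "\n".join(f"- {k}: {v}" for k, v in CODING_SCHEME.items())
    return (
        "Step 1 - Coding scheme:\n" + scheme + "\n\n"
        "Step 2 - Dyadic context:\n"
        f"Therapist: {therapist_turn}\nPatient: {patient_turn}\n\n"
        "Step 3 - Infer the patient's current state from the exchange.\n"
        "Step 4 - Assign exactly one code from the scheme to the patient turn."
    )

print(build_prompt("What would change look like for you?",
                   "I just don't think I can cut back right now."))
```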
arXiv Detail & Related papers (2024-03-20T17:47:49Z)
- Offline Risk-sensitive RL with Partial Observability to Enhance Performance in Human-Robot Teaming [1.3980986259786223]
We propose a method to incorporate model uncertainty, thus enabling risk-sensitive sequential decision-making.
Experiments were conducted with a group of twenty-six human participants within a simulated robot teleoperation environment.
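The summary does not spell out how uncertainty enters the decision rule; a common risk-sensitive construction, used here purely as an illustration and not as this paper's method, scores each action by the conditional value-at-risk (CVaR) of an ensemble of value estimates.

```python
# Illustrative risk-sensitive action selection under model uncertainty:
# score each action by the mean of the worst alpha-fraction of an ensemble's
# value estimates. Generic construction, not necessarily the cited estimator.
import numpy as np

def cvar(values: np.ndarray, alpha: float = 0.25) -> float:
    """Mean of the lowest alpha-fraction of sampled values (lower-tail risk)."""
    k = max(1, int(np.ceil(alpha * len(values))))
    return float(np.sort(values)[:k].mean())

def risk_sensitive_action(q_ensemble: np.ndarray, alpha: float = 0.25) -> int:
    """q_ensemble: (num_models, num_actions) value estimates from an ensemble."""
    scores = [cvar(q_ensemble[:, a], alpha) for a in range(q_ensemble.shape[1])]
    return int(np.argmax(scores))

q = np.array([[1.0, 2.5], [1.1, -0.5], [0.9, 2.0], [1.0, 1.8]])
print(risk_sensitive_action(q))  # -> 0: action 1 has a higher mean but a worse tail
```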
arXiv Detail & Related papers (2024-02-08T14:27:34Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents [65.16893197330589]
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
- Joint Communication and Computation Framework for Goal-Oriented Semantic Communication with Distortion Rate Resilience [13.36706909571975]
We use rate-distortion theory to analyze distortions induced by communication and semantic compression.
We can preemptively estimate the empirical accuracy of AI tasks, making the goal-oriented semantic communication problem feasible.
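The paper's specific bounds are not reproduced in this snippet; for orientation, the classical rate-distortion function that such an analysis builds on is the standard definition below (not the paper's task-specific variant).

```latex
% Standard rate-distortion function: the minimum rate (mutual information)
% achievable at average distortion no greater than D.
R(D) = \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X})
```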
arXiv Detail & Related papers (2023-09-26T00:26:29Z)
- Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
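The summary only names the idea of combining high-level pretrained representations with task-related keyword cues; a minimal illustration of such a combination, where the model choice and keyword list are assumptions rather than the paper's feature set, might look like this.

```python
# Hypothetical sketch: combining a pretrained text representation with
# keyword-based cues for AD detection. Model and keywords are illustrative only.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed task-related keywords (hesitation / vague or memory-related terms).
AD_KEYWORDS = ["um", "uh", "forget", "thing", "remember"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def extract_features(transcript: str) -> np.ndarray:
    # High-level linguistic representation: mean-pooled BERT hidden states.
    inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state        # (1, seq_len, 768)
    text_vec = hidden.mean(dim=1).squeeze(0).numpy()         # (768,)
    # Keyword cues: normalized counts of task-related terms.
    tokens = transcript.lower().split()
    kw_vec = np.array([tokens.count(k) / max(len(tokens), 1) for k in AD_KEYWORDS])
    return np.concatenate([text_vec, kw_vec])                # fed to a downstream classifier

features = extract_features("um I uh forget the name of the thing")
print(features.shape)  # (773,)
```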
arXiv Detail & Related papers (2023-03-14T16:03:28Z)
- Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations [34.670548892766625]
A BERT-based model is proposed for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT).
The model is trained in a multi-task manner in order to achieve higher interpretability.
BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements.
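The summary does not specify how the metadata is injected; one straightforward sketch, in which the metadata fields, number of CBT codes, and fusion point are assumed for illustration, concatenates a metadata embedding with the BERT [CLS] representation and trains several code-specific heads jointly.

```python
# Hypothetical sketch of a multi-task BERT scorer augmented with therapy metadata.
# Metadata fields, head counts, and fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskCBTScorer(nn.Module):
    def __init__(self, num_codes=11, num_scores=7, meta_dim=8):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.meta_proj = nn.Linear(meta_dim, 32)     # therapy metadata (e.g. session number)
        # One scoring head per behavioral code, trained jointly (multi-task).
        self.heads = nn.ModuleList(nn.Linear(hidden + 32, num_scores) for _ in range(num_codes))

    def forward(self, input_ids, attention_mask, metadata):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]          # [CLS] representation
        fused = torch.cat([pooled, torch.relu(self.meta_proj(metadata))], dim=-1)
        return [head(fused) for head in self.heads]   # per-code score logits
```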
arXiv Detail & Related papers (2021-02-23T09:22:29Z)
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
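ENIGMA's estimator itself is not described in this snippet; as background only, the classical per-trajectory importance-sampling estimator that off-policy evaluation methods are typically compared against looks like the generic baseline below (this is not ENIGMA's method).

```python
# Generic importance-sampling off-policy evaluation baseline, shown for context;
# NOT the ENIGMA estimator, whose construction differs.
from typing import List, Tuple

def is_estimate(trajectories: List[List[Tuple[float, float, float]]]) -> float:
    """Each trajectory is a list of (pi_target_prob, pi_behavior_prob, reward) per step.
    Returns the importance-weighted average return under the target policy."""
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for p_target, p_behavior, reward in traj:
            weight *= p_target / p_behavior     # cumulative likelihood ratio
            ret += reward
        total += weight * ret
    return total / len(trajectories)

# Toy usage: two logged dialogs with step-level propensities and rewards.
print(is_estimate([[(0.6, 0.5, 1.0), (0.4, 0.5, 0.0)],
                   [(0.2, 0.5, 1.0)]]))  # -> 0.68
```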
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
- Cost-effective Interactive Attention Learning with Neural Attention Processes [79.8115563067513]
We propose a novel interactive learning framework which we refer to as Interactive Attention Learning (IAL).
However, such a framework is prone to overfitting due to the scarcity of human annotations and requires costly retraining.
We tackle these challenges by proposing a sample-efficient attention mechanism and a cost-effective reranking algorithm for instances and features.
arXiv Detail & Related papers (2020-06-09T17:36:41Z)
- Domain-Guided Task Decomposition with Self-Training for Detecting Personal Events in Social Media [11.638298634523945]
Mining social media for tasks such as detecting personal experiences or events suffers from lexical sparsity, insufficient training data, and inventive lexicons.
To reduce the burden of creating extensive labeled data, we propose to perform these tasks in two steps: 1. decomposing the task into domain-specific sub-tasks by identifying key concepts, thus utilizing human domain understanding; and 2. combining the results of learners for each key concept using co-training to reduce the requirements for labeled training data.
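The co-training step is only outlined above; a minimal co-training loop over two learners with different feature views, where the models, confidence threshold, and views are assumptions rather than the paper's setup, could look like this.

```python
# Hypothetical sketch of co-training: two learners trained on different feature
# views pseudo-label unlabeled posts for each other. Models, threshold, and
# views are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xb, y, Xa_unlab, Xb_unlab, rounds=5, threshold=0.9):
    """Xa/Xb: two feature views of labeled posts; y: labels; *_unlab: unlabeled views."""
    clf_a, clf_b = LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf_a.fit(Xa, y)
        clf_b.fit(Xb, y)
        if len(Xa_unlab) == 0:
            break
        # Each learner labels the unlabeled pool; keep only confident predictions.
        proba_a = clf_a.predict_proba(Xa_unlab)
        proba_b = clf_b.predict_proba(Xb_unlab)
        confident = (proba_a.max(axis=1) > threshold) | (proba_b.max(axis=1) > threshold)
        if not confident.any():
            break
        labels_a = clf_a.classes_[proba_a.argmax(axis=1)]
        labels_b = clf_b.classes_[proba_b.argmax(axis=1)]
        # Take the pseudo-label from whichever learner is more confident.
        pseudo = np.where(proba_a.max(axis=1) >= proba_b.max(axis=1), labels_a, labels_b)
        Xa = np.vstack([Xa, Xa_unlab[confident]])
        Xb = np.vstack([Xb, Xb_unlab[confident]])
        y = np.concatenate([y, pseudo[confident]])
        Xa_unlab, Xb_unlab = Xa_unlab[~confident], Xb_unlab[~confident]
    return clf_a, clf_b
```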
arXiv Detail & Related papers (2020-04-21T14:50:31Z)