Towards Automated Psychotherapy via Language Modeling
- URL: http://arxiv.org/abs/2104.10661v1
- Date: Mon, 5 Apr 2021 01:53:39 GMT
- Title: Towards Automated Psychotherapy via Language Modeling
- Authors: Houjun Liu
- Abstract summary: The model was trained on a mix of the Cornell Movie Dialogue Corpus, for language understanding, and an open-source, anonymized, and publicly licensed psychotherapeutic dataset.
The model achieved statistically significant performance in published, standardized qualitative benchmarks against human-written validation data.
Although the model cannot replace the work of psychotherapists entirely, its ability to synthesize human-appearing utterances for the majority of the test set serves as a promising step towards communizing and easing stigma at the psychotherapeutic point-of-care.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this experiment, a model was devised, trained, and evaluated to automate
psychotherapist/client text conversations through the use of state-of-the-art,
Seq2Seq Transformer-based Natural Language Generation (NLG) systems. Through
training the model upon a mix of the Cornell Movie Dialogue Corpus for language
understanding and an open-source, anonymized, and public licensed
psychotherapeutic dataset, the model achieved statistically significant
performance in published, standardized qualitative benchmarks against
human-written validation data - meeting or exceeding human-written responses'
performance in 59.7% and 67.1% of the test set for two independent test methods
respectively. Although the model cannot replace the work of psychotherapists
entirely, its ability to synthesize human-appearing utterances for the majority
of the test set serves as a promising step towards communizing and easing
stigma at the psychotherapeutic point-of-care.
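As a rough illustration of the training setup the abstract describes, the sketch below fine-tunes an off-the-shelf Seq2Seq Transformer on client-utterance/therapist-response pairs with the Hugging Face libraries. It is a minimal sketch, not the paper's released code: the BART checkpoint, the toy data standing in for the Cornell Movie Dialogue Corpus and the psychotherapeutic dataset, the output path, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of Seq2Seq dialogue fine-tuning; not the paper's actual code.
# Model checkpoint, data, paths, and hyperparameters are assumptions.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Toy stand-ins for the mixed training data (movie dialogue + therapy exchanges).
pairs = [
    {"prompt": "I feel anxious about work lately.",
     "response": "That sounds stressful. What about work worries you most?"},
    {"prompt": "Where are we going tonight?",
     "response": "I thought we could try the place downtown."},
]
dataset = Dataset.from_list(pairs)

def tokenize(batch):
    enc = tokenizer(batch["prompt"], truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=batch["response"],
                              truncation=True, max_length=128)["input_ids"]
    return enc

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=["prompt", "response"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="therapy-seq2seq",  # hypothetical path
                                  per_device_train_batch_size=8,
                                  num_train_epochs=3,
                                  learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Generate a reply for a new client utterance.
inputs = tokenizer("I have trouble sleeping.", return_tensors="pt").to(model.device)
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```

In practice the two corpora would be loaded from files and interleaved rather than hard-coded, but the tokenize/collate/train loop stays the same.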
Related papers
- Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales [4.805861461250903]
We show how standard psychological questionnaires can be reformulated into natural language inference prompts.
We demonstrate, using a sample of 88 publicly available models, the existence of human-like mental health-related constructs.
arXiv Detail & Related papers (2024-09-29T11:00:41Z)
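A minimal sketch of how a questionnaire item might be turned into an NLI-style query, assuming an off-the-shelf zero-shot NLI model; the item wording, the agree/disagree labels, the hypothesis template, and the model choice are illustrative assumptions, not the paper's protocol.

```python
# Sketch only: scoring a psychometric-style item with a zero-shot NLI pipeline.
from transformers import pipeline

scorer = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

item = "I often feel nervous and on edge."  # illustrative questionnaire item
result = scorer(
    item,
    candidate_labels=["agree", "disagree"],
    hypothesis_template="The speaker would {} with this statement.",
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

Aggregating such scores over all items of a scale would give a construct-level estimate comparable to a human questionnaire score.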
- Calibrating LLM-Based Evaluator [92.17397504834825]
We propose AutoCalibrate, a multi-stage, gradient-free approach to calibrate and align an LLM-based evaluator toward human preference.
Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels.
Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration.
arXiv Detail & Related papers (2023-09-23T08:46:11Z)
- Automatically measuring speech fluency in people with aphasia: first achievements using read-speech data [55.84746218227712]
This study aims at assessing the relevance of a signal processing algorithm, initially developed in the field of language acquisition, for the automatic measurement of speech fluency.
arXiv Detail & Related papers (2023-08-09T07:51:40Z)
- NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality [123.97136358092585]
We develop a TTS system called NaturalSpeech that achieves human-level quality on a benchmark dataset.
Specifically, we leverage a variational autoencoder (VAE) for end-to-end text to waveform generation.
Experimental evaluations on the popular LJSpeech dataset show that the proposed NaturalSpeech achieves a CMOS of -0.01 relative to human recordings at the sentence level.
arXiv Detail & Related papers (2022-05-09T16:57:35Z)
- The state-of-the-art in text-based automatic personality prediction [1.3209941988151326]
Personality detection is a long-standing topic in psychology, and Automatic Personality Prediction (or Perception) (APP) is the automated, computational forecasting of personality from different types of human-generated or exchanged content (such as text, speech, images, and video).
arXiv Detail & Related papers (2021-10-04T04:51:11Z)
- An Evaluation of Generative Pre-Training Model-based Therapy Chatbot for Caregivers [5.2116528363639985]
Generative-based approaches, such as the OpenAI GPT models, could allow for more dynamic conversations in therapy contexts.
We built a chatbot using the GPT-2 model and fine-tuned it with 306 therapy session transcripts between family caregivers of individuals with dementia and therapists conducting Problem Solving Therapy.
Results showed that the fine-tuned model created more non-word outputs than the pre-trained model.
arXiv Detail & Related papers (2021-07-28T01:01:08Z)
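A hedged sketch of the fine-tuning step that entry describes, treating the transcripts as plain text for causal language modeling with GPT-2; the toy transcript lines, speaker tags, output path, and hyperparameters are assumptions rather than the study's actual preprocessing.

```python
# Sketch: fine-tune GPT-2 on therapy-session text as a causal language model.
# Transcripts, tags, paths, and hyperparameters here are made up for illustration.
from datasets import Dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

transcripts = [  # stand-ins for the session transcripts
    "CAREGIVER: I feel overwhelmed this week. THERAPIST: Let's break the problem into smaller steps.",
    "CAREGIVER: He keeps forgetting my name. THERAPIST: That must be painful. How did you respond?",
]
dataset = Dataset.from_list([{"text": t} for t in transcripts])
tokenized = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                        batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pst-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample a therapist-style continuation from the fine-tuned model.
prompt = tokenizer("CAREGIVER: I feel overwhelmed this week. THERAPIST:",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**prompt, max_new_tokens=40,
                                      pad_token_id=tokenizer.eos_token_id)[0]))
```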
- TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing [73.16475763422446]
We propose a multilingual robustness evaluation platform for NLP tasks (TextFlint)
It incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analysis.
TextFlint generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model's robustness.
arXiv Detail & Related papers (2021-03-21T17:20:38Z)
- Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations [34.670548892766625]
A BERT-based model is proposed for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT)
The model is trained in a multi-task manner in order to achieve higher interpretability.
BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements.
arXiv Detail & Related papers (2021-02-23T09:22:29Z)
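One plausible reading of the metadata-augmented, multi-task setup described in that entry is sketched below: a BERT encoding of an utterance is concatenated with numeric session metadata and passed to several scoring heads. The checkpoint, the number of heads, and the metadata features are assumptions, not the paper's architecture.

```python
# Sketch: BERT utterance encoding + therapy metadata, multi-task scoring heads.
# Head count, metadata features, and checkpoint are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CBTScorer(nn.Module):
    def __init__(self, metadata_dim: int = 4, num_codes: int = 11):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size
        # One head per behavioral code (multi-task) plus a session-level score.
        self.code_heads = nn.ModuleList(
            [nn.Linear(hidden + metadata_dim, 1) for _ in range(num_codes)])
        self.total_head = nn.Linear(hidden + metadata_dim, 1)

    def forward(self, input_ids, attention_mask, metadata):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        feats = torch.cat([cls, metadata], dim=-1)
        codes = torch.cat([head(feats) for head in self.code_heads], dim=-1)
        return codes, self.total_head(feats).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Let's look at the evidence for that thought."],
                  return_tensors="pt", padding=True)
metadata = torch.tensor([[0.5, 1.0, 0.0, 0.3]])  # e.g. normalized session number, duration, ...
codes, total = CBTScorer()(batch["input_ids"], batch["attention_mask"], metadata)
print(codes.shape, total.shape)  # per-code scores and a session-level score
```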
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
- Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
- Automating Text Naturalness Evaluation of NLG Systems [0.0]
We present an attempt to automate the evaluation of text naturalness.
Instead of relying on human participants for scoring or labeling the text samples, we propose to automate the process.
We analyze the text probability fractions and observe how they are influenced by the size of the generative and discriminative models involved in the process.
arXiv Detail & Related papers (2020-06-23T18:48:33Z)
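In the spirit of that last entry's probability-based analysis, the snippet below scores a text by its average token log-probability under a pre-trained language model; the GPT-2 checkpoint and the reading of the score as "naturalness" are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: average token log-probability as a rough naturalness score.
# Model choice and interpretation of the score are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Mean log-probability per token; higher means the model finds the text more natural."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy.
        loss = model(ids, labels=ids).loss
    return -loss.item()

print(avg_log_prob("The therapist asked how the week had gone."))
print(avg_log_prob("Week the gone had how asked therapist the."))  # expected to score lower
```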
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.