Detecting Suicide Risk in Online Counseling Services: A Study in a
Low-Resource Language
- URL: http://arxiv.org/abs/2209.04830v1
- Date: Sun, 11 Sep 2022 10:06:14 GMT
- Title: Detecting Suicide Risk in Online Counseling Services: A Study in a
Low-Resource Language
- Authors: Amir Bialer and Daniel Izmaylov and Avi Segal and Oren Tsur and Yossi
Levi-Belz and Kobi Gal
- Abstract summary: We propose a model that combines pre-trained language models (PLMs) with a fixed set of manually crafted (and clinically approved) suicidal cues.
Our model achieves 0.91 ROC-AUC and an F2-score of 0.55, significantly outperforming an array of strong baselines even early on in the conversation.
- Score: 5.2636083103718505
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increased awareness of situations of mental crisis and their
societal impact, online services providing emergency support are becoming
commonplace in many countries. Computational models, trained on discussions
between help-seekers and providers, can support suicide prevention by
identifying at-risk individuals. However, the lack of domain-specific models,
especially in low-resource languages, poses a significant challenge for the
automatic detection of suicide risk. We propose a model that combines
pre-trained language models (PLMs) with a fixed set of manually crafted (and
clinically approved) suicidal cues, followed by a two-stage fine-tuning
process. Our model achieves 0.91 ROC-AUC and an F2-score of 0.55, significantly
outperforming an array of strong baselines even early on in the conversation,
which is critical for real-time detection in the field. Moreover, the model
performs well across genders and age groups.
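As a rough illustration of the kind of hybrid classifier described above, the sketch below concatenates a PLM sentence embedding with counts of cue phrases and feeds both into a small classification head. The checkpoint (multilingual BERT as a stand-in for a Hebrew PLM), the cue list, and the head are illustrative assumptions; the paper's clinically approved cue lexicon, two-stage fine-tuning schedule, and decision thresholds are not reproduced here.

```python
# Hypothetical sketch only: combine PLM sentence embeddings with counts of
# manually specified cue phrases, then score risk with a small linear head.
# The cue list below is an illustrative placeholder, NOT the clinically
# approved lexicon used in the paper, and no fine-tuning is shown.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

CUES = ["want to die", "no reason to live", "saying goodbye"]  # placeholders

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
plm = AutoModel.from_pretrained("bert-base-multilingual-cased")

class CueAugmentedClassifier(nn.Module):
    def __init__(self, hidden_size: int, n_cues: int):
        super().__init__()
        self.head = nn.Linear(hidden_size + n_cues, 2)  # risk / no-risk logits

    def forward(self, text_emb, cue_counts):
        return self.head(torch.cat([text_emb, cue_counts], dim=-1))

def encode(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        emb = plm(**batch).last_hidden_state[:, 0]  # [CLS] token embeddings
    cues = torch.tensor([[t.lower().count(c) for c in CUES] for t in texts],
                        dtype=torch.float)
    return emb, cues

model = CueAugmentedClassifier(plm.config.hidden_size, len(CUES))
emb, cues = encode(["I keep thinking there is no reason to live."])
logits = model(emb, cues)
```

The reported metrics correspond to scikit-learn's roc_auc_score and fbeta_score with beta=2.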
Related papers
- Deep Learning and Large Language Models for Audio and Text Analysis in Predicting Suicidal Acts in Chinese Psychological Support Hotlines [13.59130559079134]
Approximately two million people in China attempt suicide annually, with many individuals making multiple attempts.
Deep learning models and large-scale language models (LLMs) have been introduced to the field of mental health.
This study included 1284 subjects, and was designed to validate whether deep learning models and LLMs, using audio and transcribed text from support hotlines, can effectively predict suicide risk.
arXiv Detail & Related papers (2024-09-10T02:22:50Z)
- Exploring Gender-Specific Speech Patterns in Automatic Suicide Risk Assessment [39.26231968260796]
This study involves a novel dataset comprising speech recordings of 20 patients who read neutral texts.
We extract four speech representations encompassing interpretable and deep features.
By applying gender-exclusive modelling, features extracted from an emotion fine-tuned wav2vec2.0 model can be utilised to discriminate high from low suicide risk with a balanced accuracy of 81%.
arXiv Detail & Related papers (2024-06-26T12:51:28Z)
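A hedged sketch of the gender-exclusive modelling idea summarized above: pool wav2vec 2.0 embeddings per recording and fit a separate classifier for each gender. The checkpoint (a base model rather than the emotion fine-tuned one), the pooling, and the classifier are assumptions for illustration; balanced accuracy would then come from scikit-learn's balanced_accuracy_score.

```python
# Illustrative only: gender-exclusive modelling on mean-pooled wav2vec 2.0
# embeddings, one classifier per gender. Checkpoint and data handling are
# placeholders, not the authors' emotion fine-tuned setup.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CKPT = "facebook/wav2vec2-base"  # stand-in for an emotion fine-tuned model
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
encoder = Wav2Vec2Model.from_pretrained(CKPT)

def embed(waveform_16k: np.ndarray) -> np.ndarray:
    inputs = extractor(waveform_16k, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()      # mean-pool over time

def fit_gender_exclusive(samples):
    """samples: iterable of (waveform_16k, gender, high_risk_label) tuples."""
    models = {}
    for gender in ("female", "male"):
        X = np.stack([embed(w) for w, g, _ in samples if g == gender])
        y = np.array([lab for _, g, lab in samples if g == gender])
        models[gender] = LogisticRegression(max_iter=1000).fit(X, y)
    return models
```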
- Spontaneous Speech-Based Suicide Risk Detection Using Whisper and Large Language Models [5.820498448651539]
This paper studies the automatic detection of suicide risk based on spontaneous speech from adolescents.
The proposed system achieves a detection accuracy of 0.807 and an F1-score of 0.846 on the test set with 119 subjects.
arXiv Detail & Related papers (2024-06-06T09:21:13Z)
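The entry above describes a speech-to-text-to-classification pipeline; the sketch below shows one plausible shape of it, transcribing with the open-source whisper package and prompting a hosted LLM for a binary label. The prompt wording, model names, and the use of an API (rather than fine-tuned models, which the paper may use) are assumptions.

```python
# Plausible two-stage pipeline (not the paper's exact system): transcribe a
# recording with Whisper, then prompt an LLM for a binary risk judgement.
import whisper
from openai import OpenAI

asr = whisper.load_model("small")
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_recording(audio_path: str) -> str:
    transcript = asr.transcribe(audio_path)["text"]
    prompt = (
        "For a clinical research study, read the transcript below and answer "
        "only 'at-risk' or 'not-at-risk'.\n\n" + transcript
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()
```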
- SOS-1K: A Fine-grained Suicide Risk Classification Dataset for Chinese Social Media Analysis [22.709733830774788]
This study presents a Chinese social media dataset designed for fine-grained suicide risk classification.
Seven pre-trained models were evaluated on two tasks: distinguishing high from low suicide risk, and fine-grained suicide risk classification on a scale of 0 to 10.
Deep learning models show good performance in distinguishing between high and low suicide risk, with the best model achieving an F1 score of 88.39%.
arXiv Detail & Related papers (2024-04-19T06:58:51Z)
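One way the two SOS-1K-style tasks could be framed with a single pre-trained model is sketched below: an 11-way head over the 0-10 levels, with the binary high/low label derived by thresholding the predicted level. The Chinese checkpoint and the cut-off are guesses for illustration, not the dataset's definitions.

```python
# Assumed setup for the two SOS-1K-style tasks: an 11-way head over levels
# 0-10, with a binary high/low label derived by thresholding the predicted
# level. The checkpoint and the cut-off of 6 are illustrative guesses.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CKPT = "bert-base-chinese"  # stand-in Chinese PLM
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=11)

def predict_levels(posts):
    batch = tokenizer(posts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**batch).logits   # (batch, 11)
    level = logits.argmax(dim=-1)        # fine-grained level, 0-10
    high_risk = level >= 6               # assumed threshold for "high risk"
    return level, high_risk
```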
- Non-Invasive Suicide Risk Prediction Through Speech Analysis [74.8396086718266]
We present a non-invasive, speech-based approach for automatic suicide risk assessment.
We extract three sets of features, including wav2vec, interpretable speech and acoustic features, and deep learning-based spectral representations.
Our most effective speech model achieves a balanced accuracy of 66.2%.
arXiv Detail & Related papers (2024-04-18T12:33:57Z)
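To make the feature-set comparison above concrete, a generic sketch: train one simple classifier per feature family and compare balanced accuracies. Feature extraction is stubbed out, and the paper's actual features and models are not reproduced.

```python
# Sketch of comparing feature sets: train one simple classifier per feature
# family and compare balanced accuracy. Feature extraction is stubbed out;
# the paper's wav2vec, interpretable acoustic, and spectral features are
# represented only as precomputed arrays supplied by the caller.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def compare_feature_sets(feature_sets: dict[str, np.ndarray], y: np.ndarray):
    scores = {}
    for name, X in feature_sets.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores[name] = balanced_accuracy_score(y_te, clf.predict(X_te))
    return scores
```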
- DPP-Based Adversarial Prompt Searching for Language Models [56.73828162194457]
Auto-regressive Selective Replacement Ascent (ASRA) is a discrete optimization algorithm that selects prompts based on both quality and similarity with a determinantal point process (DPP).
Experimental results on six different pre-trained language models demonstrate the efficacy of ASRA for eliciting toxic content.
arXiv Detail & Related papers (2024-03-01T05:28:06Z)
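The DPP idea mentioned above can be illustrated with a generic greedy MAP selection over a quality-weighted similarity kernel; this is not the ASRA algorithm itself, only the standard quality-versus-diversity trade-off a DPP encodes.

```python
# Simplified illustration of DPP-style candidate selection: build a kernel
# from per-candidate quality scores and pairwise similarities, then greedily
# pick a subset that trades off quality against diversity. Generic greedy
# MAP approximation, not the ASRA algorithm from the paper.
import numpy as np

def greedy_dpp_select(quality: np.ndarray, sim: np.ndarray, k: int) -> list[int]:
    """quality: (n,) scores; sim: (n, n) similarity kernel; returns k indices."""
    L = np.diag(quality) @ sim @ np.diag(quality)   # quality-weighted kernel
    selected: list[int] = []
    for _ in range(k):
        best, best_gain = -1, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```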
- Locally Differentially Private Document Generation Using Zero Shot Prompting [61.20953109732442]
We propose a locally differentially private mechanism called DP-Prompt to counter author de-anonymization attacks.
When DP-Prompt is used with a powerful language model like ChatGPT (gpt-3.5), we observe a notable reduction in the success rate of de-anonymization attacks.
arXiv Detail & Related papers (2023-10-24T18:25:13Z)
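A minimal sketch of the zero-shot paraphrasing step that DP-Prompt builds on, assuming a hosted chat model; the exact prompt, sampling temperature, and the paper's formal differential-privacy accounting are not reproduced here.

```python
# Sketch of zero-shot paraphrasing for author de-identification: ask a chat
# model to rewrite a document, keeping content while discarding authorial
# style. Prompt, model, and temperature are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def paraphrase(document: str, temperature: float = 1.5) -> str:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[{
            "role": "user",
            "content": "Paraphrase the following text:\n\n" + document,
        }],
    )
    return reply.choices[0].message.content
```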
- LaMDA: Language Models for Dialog Applications [75.75051929981933]
LaMDA is a family of Transformer-based neural language models specialized for dialog.
Fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements.
arXiv Detail & Related papers (2022-01-20T15:44:37Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
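A generic PyTorch sketch of a momentum-style adversarial patch update, with the crowd-counting loss and patch placement abstracted behind caller-supplied functions; the paper's specific objective and constraints are not reproduced.

```python
# Generic momentum-based adversarial patch update: accumulate a momentum term
# over normalized gradients and take sign steps. The crowd-counting loss and
# patch placement are abstracted into `counting_loss` and `apply_patch`,
# which the caller must supply (assumed, not from the paper).
import torch

def momentum_patch_attack(model, images, patch, apply_patch, counting_loss,
                          steps=40, step_size=2 / 255, decay=0.9):
    patch = patch.clone().requires_grad_(True)
    momentum = torch.zeros_like(patch)
    for _ in range(steps):
        loss = counting_loss(model(apply_patch(images, patch)))
        grad, = torch.autograd.grad(loss, patch)
        momentum = decay * momentum + grad / grad.abs().mean().clamp_min(1e-12)
        patch = (patch + step_size * momentum.sign()).clamp(0, 1)
        patch = patch.detach().requires_grad_(True)
    return patch.detach()
```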
- Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained using small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
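Finally, a hedged sketch of what joint modelling could look like for the last entry: a shared encoder over the ASR hypothesis feeding both a token-level correction head and an utterance-level understanding head. Dimensions, heads, and the way the losses would be combined are illustrative, not the paper's architecture.

```python
# Hedged sketch of joint ASR-correction and language-understanding modelling:
# one shared Transformer encoder, a per-token correction head, and a
# per-utterance intent head. Sizes and structure are illustrative only.
import torch
import torch.nn as nn

class JointASRCorrectionLU(nn.Module):
    def __init__(self, vocab_size, n_intents, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.correction_head = nn.Linear(d_model, vocab_size)  # per-token fix
        self.intent_head = nn.Linear(d_model, n_intents)       # per-utterance

    def forward(self, asr_token_ids):
        h = self.encoder(self.embed(asr_token_ids))      # (B, T, d_model)
        corrected_logits = self.correction_head(h)       # (B, T, vocab)
        intent_logits = self.intent_head(h.mean(dim=1))  # (B, n_intents)
        return corrected_logits, intent_logits
```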
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.