Integrating Supervised Extractive and Generative Language Models for Suicide Risk Evidence Summarization
- URL: http://arxiv.org/abs/2403.15478v1
- Date: Wed, 20 Mar 2024 21:16:10 GMT
- Title: Integrating Supervised Extractive and Generative Language Models for Suicide Risk Evidence Summarization
- Authors: Rika Tanaka, Yusuke Fukazawa
- Abstract summary: We propose a method that integrates supervised extractive and generative language models for providing supporting evidence of suicide risk.
First, we construct a BERT-based model for estimating sentence-level suicide risk and negative sentiment.
Next, we precisely identify high suicide risk sentences by emphasizing elevated probabilities of both suicide risk and negative sentiment.
Finally, we integrate generative summaries using the MentaLLaMa framework and extractive summaries from identified high suicide risk sentences and a specialized dictionary of suicidal risk words.
- Score: 2.0969143947138016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a method that integrates supervised extractive and generative language models for providing supporting evidence of suicide risk in the CLPsych 2024 shared task. Our approach comprises three steps. Initially, we construct a BERT-based model for estimating sentence-level suicide risk and negative sentiment. Next, we precisely identify high suicide risk sentences by emphasizing elevated probabilities of both suicide risk and negative sentiment. Finally, we integrate generative summaries using the MentaLLaMa framework and extractive summaries from identified high suicide risk sentences and a specialized dictionary of suicidal risk words. Our team, SophiaADS, achieved 1st place for highlight extraction and ranked 10th for summary generation, based on recall and consistency metrics, respectively.
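As a rough illustration of steps 1-2 (the probability-gated sentence filter), here is a minimal sketch assuming Hugging Face transformers; the checkpoint names and thresholds are hypothetical placeholders, not artifacts released with the paper.

```python
# Sketch of the sentence-level filter: keep a sentence only when BOTH a
# suicide-risk classifier and a negative-sentiment classifier score it highly.
# Checkpoint names and thresholds are hypothetical, not the authors' artifacts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RISK_CKPT = "your-org/bert-suicide-risk"        # hypothetical fine-tuned BERT
SENT_CKPT = "your-org/bert-negative-sentiment"  # hypothetical fine-tuned BERT

def positive_prob(sentences, ckpt):
    """Return P(positive class) per sentence from a binary classifier."""
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()
    with torch.no_grad():
        batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
        probs = torch.softmax(model(**batch).logits, dim=-1)
    return probs[:, 1].tolist()  # assumes label index 1 = risk / negative

def select_high_risk(sentences, risk_thr=0.7, neg_thr=0.7):
    """Keep sentences whose risk AND negative-sentiment probabilities are high."""
    p_risk = positive_prob(sentences, RISK_CKPT)
    p_neg = positive_prob(sentences, SENT_CKPT)
    return [s for s, r, n in zip(sentences, p_risk, p_neg)
            if r >= risk_thr and n >= neg_thr]
```

In this reading, requiring both probabilities to clear a threshold is one way to realize "emphasizing elevated probabilities of both suicide risk and negative sentiment"; a product of the two probabilities would be a natural alternative gate.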
Related papers
- Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights [50.89022445197919]
We propose a speech-specific risk taxonomy covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity).
Based on this taxonomy, we create a small-scale dataset for evaluating current LMMs' capability in detecting these categories of risk.
arXiv Detail & Related papers (2024-06-25T10:08:45Z)
- Spontaneous Speech-Based Suicide Risk Detection Using Whisper and Large Language Models [5.820498448651539]
This paper studies the automatic detection of suicide risk based on spontaneous speech from adolescents.
The proposed system achieves a detection accuracy of 0.807 and an F1-score of 0.846 on the test set with 119 subjects.
arXiv Detail & Related papers (2024-06-06T09:21:13Z)
- Harmful Suicide Content Detection [35.0823928978658]
We introduce a harmful suicide content detection task for classifying online suicide content into five harmfulness levels.
Our contributions include proposing a novel detection task, a multi-modal Korean benchmark with expert annotations, and suggesting strategies using LLMs to detect illegal and harmful content.
arXiv Detail & Related papers (2024-06-03T03:43:44Z)
- SOS-1K: A Fine-grained Suicide Risk Classification Dataset for Chinese Social Media Analysis [22.709733830774788]
This study presents a Chinese social media dataset designed for fine-grained suicide risk classification.
Seven pre-trained models were evaluated on two tasks: distinguishing high from low suicide risk, and fine-grained suicide risk classification on a scale of 0 to 10.
Deep learning models show good performance in distinguishing between high and low suicide risk, with the best model achieving an F1 score of 88.39%.
arXiv Detail & Related papers (2024-04-19T06:58:51Z)
- Non-Invasive Suicide Risk Prediction Through Speech Analysis [74.8396086718266]
We present a non-invasive, speech-based approach for automatic suicide risk assessment.
We extract three sets of features, including wav2vec, interpretable speech and acoustic features, and deep learning-based spectral representations.
Our most effective speech model achieves a balanced accuracy of 66.2%.
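For context, a bare-bones version of the wav2vec branch of such a pipeline might look like the sketch below; the public checkpoint, mean-pooling, and logistic-regression classifier are assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch: utterance-level wav2vec 2.0 embeddings (mean-pooled
# frame features) fed to a simple classifier. Not the paper's exact pipeline.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

CKPT = "facebook/wav2vec2-base"  # public checkpoint; expects 16 kHz mono audio

extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
encoder = Wav2Vec2Model.from_pretrained(CKPT).eval()

def embed(waveform_16khz):
    """Mean-pool wav2vec 2.0 hidden states into one fixed-size vector."""
    inputs = extractor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# X: list of 1-D numpy waveforms, y: binary risk labels (placeholder data):
# clf = LogisticRegression(class_weight="balanced").fit([embed(w) for w in X], y)
```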
arXiv Detail & Related papers (2024-04-18T12:33:57Z)
- DPP-Based Adversarial Prompt Searching for Lanugage Models [56.73828162194457]
Auto-regressive Selective Replacement Ascent (ASRA) is a discrete optimization algorithm that selects prompts based on both quality and similarity, using a determinantal point process (DPP).
Experimental results on six different pre-trained language models demonstrate the efficacy of ASRA for eliciting toxic content.
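To make the DPP ingredient concrete, below is a generic greedy MAP selection over an L-ensemble kernel built from per-candidate quality scores and a pairwise similarity matrix; this illustrates the quality-plus-similarity trade-off the summary mentions, not the full ASRA procedure.

```python
# Generic quality/diversity selection with a DPP-style kernel:
# L = diag(q) @ S @ diag(q); greedily add the item maximizing log det.
# This sketches the ingredient ASRA builds on, not ASRA itself.
import numpy as np

def greedy_dpp_select(quality, similarity, k):
    """Pick k candidates trading off quality q_i against pairwise similarity S."""
    q = np.asarray(quality, dtype=float)
    L = np.diag(q) @ np.asarray(similarity, dtype=float) @ np.diag(q)
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(len(q)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        if best_i is None:
            break
        selected.append(best_i)
    return selected
```

In a prompt-search setting, `quality` could be each candidate prompt's attack score and `similarity` the cosine similarity of prompt embeddings, so the selected set stays both effective and diverse.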
arXiv Detail & Related papers (2024-03-01T05:28:06Z)
- Navigating the OverKill in Large Language Models [84.62340510027042]
We investigate the factors for overkill by exploring how models handle and determine the safety of queries.
Our findings reveal shortcuts within models that lead to over-attention to harmful words like 'kill'; prompts emphasizing safety exacerbate this overkill.
We introduce Self-Contrastive Decoding (Self-CD), a training-free and model-agnostic strategy, to alleviate this phenomenon.
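A minimal sketch of the contrastive idea follows, under the assumption that Self-CD contrasts next-token logits from a plain prompt against the same prompt with safety emphasis; the system-prompt wording, combination rule, and alpha are guesses following the usual contrastive-decoding pattern, not the paper's exact formulation.

```python
# Hedged sketch of self-contrastive decoding: subtract the logit shift induced
# by a safety-emphasizing prompt. Wording, formula, and alpha are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CKPT = "gpt2"  # any causal LM; stands in for the chat models studied

tok = AutoTokenizer.from_pretrained(CKPT)
lm = AutoModelForCausalLM.from_pretrained(CKPT).eval()

def contrastive_next_token(prompt, alpha=1.0):
    """(1 + alpha) * logits_plain - alpha * logits_safety, argmax over vocab."""
    safety_prompt = "Answer safely and harmlessly. " + prompt  # assumed wording
    with torch.no_grad():
        plain = lm(**tok(prompt, return_tensors="pt")).logits[0, -1]
        safe = lm(**tok(safety_prompt, return_tensors="pt")).logits[0, -1]
    return int(torch.argmax((1 + alpha) * plain - alpha * safe))
```

Being a pure decoding-time contrast, a scheme like this needs no training and works with any model exposing logits, consistent with the "training-free and model-agnostic" description.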
arXiv Detail & Related papers (2024-01-31T07:26:47Z)
- Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation [65.48908724440047]
We propose a method called reverse generation to construct adversarial contexts conditioned on a given response.
We test three popular pretrained dialogue models (Blender, DialoGPT, and Plato2) and find that BAD+ can largely expose their safety problems.
arXiv Detail & Related papers (2022-12-04T12:23:41Z)
- Detecting Suicide Risk in Online Counseling Services: A Study in a Low-Resource Language [5.2636083103718505]
We propose a model that combines pre-trained language models (PLMs) with a fixed, manually crafted (and clinically approved) set of suicidal cues.
Our model achieves 0.91 ROC-AUC and an F2-score of 0.55, significantly outperforming an array of strong baselines even early on in the conversation.
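As a toy illustration of fusing a PLM score with fixed cues (the cue list, weights, and fusion rule below are placeholders, not the clinically approved resources the paper uses):

```python
# Toy late-fusion of a PLM risk probability with dictionary-cue matches.
# Cue phrases, weights, and saturation constant are illustrative placeholders.
import re

SUICIDAL_CUES = ["want to die", "no reason to live", "end it all"]  # placeholder

def cue_count(text):
    """Total occurrences of the fixed cue phrases in the message."""
    t = text.lower()
    return sum(len(re.findall(re.escape(c), t)) for c in SUICIDAL_CUES)

def fused_risk_score(plm_prob, text, w_plm=0.8, w_cues=0.2):
    """Weighted PLM probability plus a cue signal capped at 3 matches."""
    cue_signal = min(cue_count(text), 3) / 3.0
    return w_plm * plm_prob + w_cues * cue_signal
```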
arXiv Detail & Related papers (2022-09-11T10:06:14Z)
- Am I No Good? Towards Detecting Perceived Burdensomeness and Thwarted Belongingness from Suicide Notes [51.378225388679425]
We present an end-to-end multitask system to address a novel task of detection of Perceived Burdensomeness (PB) and Thwarted Belongingness (TB) from suicide notes.
We also introduce a manually translated code-mixed suicide notes corpus, CoMCEASE-v2.0, based on the benchmark CEASE-v2.0 dataset.
We exploit the temporal orientation and emotion information in the suicide notes to boost overall performance.
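A generic shared-encoder, two-head layout for such a multitask setup might look like the following; the architecture is an assumption for illustration, not the authors' exact system (which additionally injects temporal-orientation and emotion signals).

```python
# Generic multitask layout: one text encoder, two binary heads (PB and TB).
# An assumed architecture for illustration, not the paper's exact system.
import torch.nn as nn

class MultitaskPBTB(nn.Module):
    def __init__(self, encoder, hidden_size=768):
        super().__init__()
        self.encoder = encoder                     # e.g., a BERT-style model
        self.pb_head = nn.Linear(hidden_size, 2)   # Perceived Burdensomeness
        self.tb_head = nn.Linear(hidden_size, 2)   # Thwarted Belongingness

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
        return self.pb_head(pooled), self.tb_head(pooled)

# Training would sum cross-entropy losses from both heads; task weighting
# is a design choice.
```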
arXiv Detail & Related papers (2022-05-20T06:31:08Z)