A Quantitative and Qualitative Analysis of Suicide Ideation Detection
using Deep Learning
- URL: http://arxiv.org/abs/2206.08673v1
- Date: Fri, 17 Jun 2022 10:23:37 GMT
- Title: A Quantitative and Qualitative Analysis of Suicide Ideation Detection
using Deep Learning
- Authors: Siqu Long, Rina Cabral, Josiah Poon, Soyeon Caren Han
- Abstract summary: This paper replicated competitive social media-based suicidality detection/prediction models.
We evaluated the feasibility of detecting suicidal ideation using multiple datasets and different state-of-the-art deep learning models.
- Score: 5.192118773220605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms have received much attention from researchers
working on youth suicide prevention. A few studies apply machine learning or
deep learning-based text classification approaches to classify social media
posts that carry suicidality risk. This paper replicated competitive social
media-based suicidality detection/prediction models. We evaluated the
feasibility of detecting suicidal ideation using multiple datasets and
different state-of-the-art deep learning models: RNN-, CNN-, and
Attention-based models. Using two suicidality evaluation datasets, we evaluated
28 combinations of 7 input embeddings with 4 commonly used deep learning models,
along with 5 pretrained language models, in quantitative and qualitative ways.
Our replication study confirms that deep learning generally works well for
social media-based suicidality detection, but performance depends heavily on
the quality of the dataset.
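The abstract describes a grid evaluation: each input embedding is paired with each downstream classifier and scored on each dataset (7 x 4 = 28 combinations per dataset), with pretrained language models compared alongside. The following is a minimal sketch of such a protocol; the embedding, classifier, and dataset names and the training stub are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the 7-embeddings x 4-classifiers evaluation grid
# (28 combinations per dataset). All names below are illustrative
# placeholders, not the paper's implementation.
from itertools import product
from random import random

EMBEDDINGS = ["glove", "word2vec", "fasttext", "elmo",
              "bert_static", "roberta_static", "char_ngram"]   # 7 (assumed names)
CLASSIFIERS = ["rnn", "lstm", "cnn", "attention"]               # 4 deep models
DATASETS = ["suicidality_dataset_A", "suicidality_dataset_B"]   # 2 evaluation sets

def train_and_score(embedding: str, classifier: str, dataset: str) -> float:
    """Stand-in for the real pipeline: embed the posts with `embedding`,
    train `classifier`, and return a test-set score. Dummy value here."""
    return random()  # placeholder; the study reports real metrics per cell

# One score per (dataset, embedding, classifier) cell of the grid.
results = {
    (d, e, c): train_and_score(e, c, d)
    for d in DATASETS
    for e, c in product(EMBEDDINGS, CLASSIFIERS)   # 7 x 4 = 28 cells
}

# Rank combinations per dataset, mirroring the quantitative comparison.
for d in DATASETS:
    best = max(((e, c) for e, c in product(EMBEDDINGS, CLASSIFIERS)),
               key=lambda ec: results[(d, *ec)])
    print(d, "best combination:", best)
```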
Related papers
- Leveraging Large Language Models for Suicide Detection on Social Media with Limited Labels [3.1399304968349186]
This paper explores the use of Large Language Models (LLMs) to automatically detect suicidal content in text-based social media posts.
We develop an ensemble approach involving prompting with Qwen2-72B-Instruct, and using fine-tuned models such as Llama3-8B, Llama3.1-8B, and Gemma2-9B.
Experimental results show that the ensemble model significantly improves detection accuracy, by 5 percentage points over the individual models (a minimal voting sketch appears after this list).
arXiv Detail & Related papers (2024-10-06T14:45:01Z) - SOS-1K: A Fine-grained Suicide Risk Classification Dataset for Chinese Social Media Analysis [22.709733830774788]
This study presents a Chinese social media dataset designed for fine-grained suicide risk classification.
Seven pre-trained models were evaluated on two tasks: distinguishing high versus low suicide risk, and fine-grained suicide risk classification on a scale of 0 to 10.
Deep learning models show good performance in distinguishing between high and low suicide risk, with the best model achieving an F1 score of 88.39%.
arXiv Detail & Related papers (2024-04-19T06:58:51Z) - Detecting Suicidality in Arabic Tweets Using Machine Learning and Deep
Learning Techniques [0.32885740436059047]
This study develops the first Arabic suicidality detection dataset built from Twitter.
arXiv Detail & Related papers (2023-09-01T04:30:59Z) - Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z) - An ensemble deep learning technique for detecting suicidal ideation from
posts in social media platforms [0.0]
This paper proposes an LSTM-Attention-CNN combined model that analyzes social media submissions to detect suicidal intentions (see the architecture sketch after this list).
The proposed model demonstrated an accuracy of 90.3% and an F1-score of 92.6%.
arXiv Detail & Related papers (2021-12-17T15:34:03Z) - Detecting Potentially Harmful and Protective Suicide-related Content on
Twitter: A Machine Learning Approach [0.1582078748632554]
We apply machine learning methods to automatically label large quantities of Twitter data.
Two deep learning models achieved the best performance in two classification tasks.
This work enables future large-scale investigations on harmful and protective effects of various kinds of social media content on suicide rates and on help-seeking behavior.
arXiv Detail & Related papers (2021-12-09T09:35:48Z) - Perceptual Score: What Data Modalities Does Your Model Perceive? [73.75255606437808]
We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
We find that recent, more accurate multi-modal models for visual question-answering tend to perceive the visual data less than their predecessors.
Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions.
arXiv Detail & Related papers (2021-10-27T12:19:56Z) - Deep convolutional forest: a dynamic deep ensemble approach for spam
detection in text [219.15486286590016]
This paper introduces a dynamic deep ensemble model for spam detection that adjusts its complexity and extracts features automatically.
As a result, the model achieved high precision, recall, F1-score, and accuracy of 98.38%.
arXiv Detail & Related papers (2021-10-10T17:19:37Z) - AES Systems Are Both Overstable And Oversensitive: Explaining Why And
Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity and overstability causing samples with high accuracies.
arXiv Detail & Related papers (2021-09-24T03:49:38Z) - Is Automated Topic Model Evaluation Broken?: The Incoherence of
Coherence [62.826466543958624]
We look at the standardization gap and the validation gap in topic model evaluation.
Recent models relying on neural components surpass classical topic models according to these metrics.
We use automatic coherence along with the two most widely accepted human judgment tasks, namely, topic rating and word intrusion.
arXiv Detail & Related papers (2021-07-05T17:58:52Z) - Can x2vec Save Lives? Integrating Graph and Language Embeddings for
Automatic Mental Health Classification [91.3755431537592]
I show how merging graph and language embedding models (metapath2vec and doc2vec) avoids resource limits.
When integrated, the two data sources produce highly accurate predictions (90%, with 10% false positives and 12% false negatives).
These results extend research on the importance of simultaneously analyzing behavior and language in massive networks.
arXiv Detail & Related papers (2020-01-04T20:56:21Z)
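The LLM-ensemble entry above (prompted Qwen2-72B-Instruct plus fine-tuned Llama3-8B, Llama3.1-8B, and Gemma2-9B) reports a gain from combining model outputs. Below is a minimal majority-voting sketch; the voting rule, the tie-breaking choice, and the example predictions are assumptions for illustration, not details taken from that paper.

```python
# Majority-vote ensembling of several suicidality classifiers, in the spirit
# of the LLM-ensemble paper listed above. The voting rule and the example
# predictions are assumptions; the paper may combine models differently.
from collections import Counter
from typing import Sequence

def majority_vote(predictions_per_model: Sequence[Sequence[int]]) -> list[int]:
    """Combine per-model binary labels (1 = suicidal content, 0 = not) by
    taking the most common label for each post; ties default to 1 so that
    borderline posts are flagged for review (a conservative assumption)."""
    combined = []
    for labels in zip(*predictions_per_model):   # one tuple of votes per post
        counts = Counter(labels)
        combined.append(1 if counts[1] >= counts[0] else 0)
    return combined

# Example: three hypothetical models labelling four posts.
qwen_prompted  = [1, 0, 1, 0]
llama_finetune = [1, 0, 0, 0]
gemma_finetune = [1, 1, 1, 0]
print(majority_vote([qwen_prompted, llama_finetune, gemma_finetune]))
# -> [1, 0, 1, 0]
```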
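The ensemble deep learning entry above describes an LSTM-Attention-CNN combined model. The sketch below shows one plausible way to wire such a model in PyTorch; the layer sizes, the additive attention, the pooling scheme, and the concatenation of the LSTM and CNN branches are guesses, not that paper's reported configuration.

```python
# A rough sketch (assumptions throughout) of an LSTM + attention + CNN text
# classifier of the kind described in the ensemble paper above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMAttentionCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                   # per-step attention scores
        self.conv = nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * hidden + hidden, n_classes)

    def forward(self, token_ids):                              # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))                # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)           # attention over time steps
        attended = (weights * h).sum(dim=1)                    # (batch, 2*hidden)
        conv_feats = F.relu(self.conv(h.transpose(1, 2)))      # (batch, hidden, seq)
        pooled = conv_feats.max(dim=2).values                  # global max pooling
        return self.out(torch.cat([attended, pooled], dim=1))  # class logits

# Smoke test on random token ids.
logits = LSTMAttentionCNN()(torch.randint(0, 20000, (4, 32)))
print(logits.shape)  # torch.Size([4, 2])
```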