Leveraging Contextual Relatedness to Identify Suicide Documentation in
Clinical Notes through Zero Shot Learning
- URL: http://arxiv.org/abs/2301.03531v1
- Date: Mon, 9 Jan 2023 17:26:07 GMT
- Title: Leveraging Contextual Relatedness to Identify Suicide Documentation in
Clinical Notes through Zero Shot Learning
- Authors: Terri Elizabeth Workman, Joseph L. Goulet, Cynthia A. Brandt, Allison
R. Warren, Jacob Eleazer, Melissa Skanderson, Luke Lindemann, John R.
Blosnich, John O'Leary, Qing Zeng-Treitler
- Abstract summary: This paper describes a novel methodology that identifies suicidality in clinical notes by addressing this data sparsity issue through zero-shot learning.
A deep neural network was trained by mapping the training documents' contents to a semantic space.
Applying a 0.90 probability threshold, the methodology identified notes not associated with a relevant ICD-10-CM code that documented suicidality, with 94 percent accuracy.
- Score: 8.57098973963918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying suicidality, including suicidal ideation, attempts, and risk
factors, in clinical notes within electronic health record data is difficult. A
major difficulty is the lack of training samples given the small number of true
positive instances among the increasingly large number of patients being
screened. This paper describes a novel methodology that identifies suicidality
in clinical notes by addressing this data sparsity issue through zero-shot
learning. U.S. Veterans Affairs clinical notes served as data. The training
dataset label was determined using diagnostic codes of suicide attempt and
self-harm. A base string associated with the target label of suicidality was
used to provide auxiliary information by narrowing the positive training cases
to those containing the base string. A deep neural network was trained by
mapping the training documents' contents to a semantic space. For comparison, we
trained another deep neural network using the identical training dataset labels
and bag-of-words features. The zero-shot learning model outperformed the
baseline model in terms of AUC, sensitivity, specificity, and positive
predictive value at multiple probability thresholds. Applying a 0.90
probability threshold, the methodology identified notes not associated with a
relevant ICD-10-CM code that documented suicidality, with 94 percent accuracy.
This new method can effectively identify suicidality without requiring manual
annotation.
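To make the described pipeline concrete, the sketch below shows one way the pieces could fit together: weak labels derived from diagnostic codes narrowed by a base string, notes embedded into a semantic space, a small neural classifier, and the 0.90 probability threshold reported above. This is a minimal illustration, not the authors' implementation; the embedding model ("all-MiniLM-L6-v2"), the base string ("suicid"), the network size, and the use of scikit-learn's MLPClassifier as a stand-in for the paper's deep neural network are all assumptions.
```python
# Minimal sketch (not the authors' code) of the weak-labeling + semantic-embedding
# + thresholding workflow described in the abstract. Model choices are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedder, not specified in the paper
from sklearn.neural_network import MLPClassifier       # stand-in for the paper's deep neural network

BASE_STRING = "suicid"      # assumed stand-in for the paper's base string
PROB_THRESHOLD = 0.90       # probability threshold reported in the abstract

def weak_label(note_text: str, has_attempt_or_selfharm_code: bool) -> int:
    """Positive only if the note carries a relevant diagnostic code AND contains
    the base string, mirroring how the abstract narrows the positive training cases."""
    return int(has_attempt_or_selfharm_code and BASE_STRING in note_text.lower())

def train_classifier(notes, has_code_flags):
    """Embed notes into a semantic space and fit a small neural classifier."""
    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
    X = embedder.encode(list(notes))
    y = np.array([weak_label(t, c) for t, c in zip(notes, has_code_flags)])
    clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=200).fit(X, y)
    return embedder, clf

def flag_notes(embedder, clf, new_notes):
    """Flag notes whose predicted probability of suicidality exceeds the threshold.
    Assumes both classes were present in training so predict_proba has two columns."""
    probs = clf.predict_proba(embedder.encode(list(new_notes)))[:, 1]
    return probs >= PROB_THRESHOLD
```
In this sketch, the high threshold is what lets the classifier surface notes that document suicidality even when no relevant ICD-10-CM code is attached, which is the behavior the abstract evaluates at 94 percent accuracy.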
Related papers
- Suicide Phenotyping from Clinical Notes in Safety-Net Psychiatric Hospital Using Multi-Label Classification with Pre-Trained Language Models [10.384299115679369]
Pre-trained language models offer promise for identifying suicidality from unstructured clinical narratives.
We evaluated the performance of four BERT-based models using two fine-tuning strategies.
The findings highlight that model optimization, pretraining with domain-relevant data, and a single multi-label classification strategy enhance model performance for suicide phenotyping.
arXiv Detail & Related papers (2024-09-27T16:13:38Z)
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks are proven to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both beneficial and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- SOS-1K: A Fine-grained Suicide Risk Classification Dataset for Chinese Social Media Analysis [22.709733830774788]
This study presents a Chinese social media dataset designed for fine-grained suicide risk classification.
Seven pre-trained models were evaluated on two tasks: high versus low suicide risk, and fine-grained suicide risk classification on a scale of 0 to 10.
Deep learning models show good performance in distinguishing between high and low suicide risk, with the best model achieving an F1 score of 88.39%.
arXiv Detail & Related papers (2024-04-19T06:58:51Z)
- Non-Invasive Suicide Risk Prediction Through Speech Analysis [74.8396086718266]
We present a non-invasive, speech-based approach for automatic suicide risk assessment.
We extract three sets of features, including wav2vec, interpretable speech and acoustic features, and deep learning-based spectral representations.
Our most effective speech model achieves a balanced accuracy of 66.2%.
arXiv Detail & Related papers (2024-04-18T12:33:57Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Contrastive Deep Encoding Enables Uncertainty-aware Machine-learning-assisted Histopathology [6.548275341067594]
Terabytes of training data can be consciously utilized to pre-train deep networks to encode informative representations.
We show that our approach can reach the state-of-the-art (SOTA) for patch-level classification with only 1-10% randomly selected annotations.
arXiv Detail & Related papers (2023-09-13T17:37:19Z)
- Transductive Linear Probing: A Novel Framework for Few-Shot Node Classification [56.17097897754628]
We show that transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol.
We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
arXiv Detail & Related papers (2022-12-11T21:10:34Z)
- An ensemble deep learning technique for detecting suicidal ideation from posts in social media platforms [0.0]
This paper proposes an LSTM-Attention-CNN combined model to analyze social media submissions to detect suicidal intentions.
The proposed model demonstrated an accuracy of 90.3 percent and an F1-score of 92.6 percent.
arXiv Detail & Related papers (2021-12-17T15:34:03Z)
- Deep Learning for Suicide and Depression Identification with Unsupervised Label Correction [0.0]
Early detection of suicidal ideation in depressed individuals can allow for adequate medical attention and support.
Recent NLP research focuses on classifying, from a given piece of text, whether an individual is suicidal or clinically healthy.
We propose SDCNL, a suicide versus depression classification method based on a deep learning approach.
arXiv Detail & Related papers (2021-02-18T15:40:07Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)