Reliability Analysis of Psychological Concept Extraction and
Classification in User-penned Text
- URL: http://arxiv.org/abs/2401.06709v1
- Date: Fri, 12 Jan 2024 17:19:14 GMT
- Authors: Muskan Garg, MSVPJ Sathvik, Amrit Chadha, Shaina Raza, Sunghwan Sohn
- Abstract summary: We annotate the LoST dataset to capture nuanced textual cues that suggest the presence of low self-esteem in the posts of Reddit users.
Our findings suggest the need to shift the focus of PLMs from Triggers and Consequences to a more comprehensive explanation.
- Score: 9.26840677406494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The social NLP research community has witnessed a recent surge of
computational advancements in mental health analysis, aimed at building
responsible AI models for the complex interplay between language use and
self-perception. Such responsible AI models aid in quantifying psychological
concepts from user-penned texts on social media. Thinking beyond the
low-level (classification) task, we advance the existing binary
classification dataset towards the higher-level task of reliability analysis
through the lens of explanations, posing it as a safety measure. We annotate
the LoST dataset to capture nuanced textual cues that suggest the presence of
low self-esteem in the posts of Reddit users. We further observe that NLP
models developed for determining the presence of low self-esteem focus on
three types of textual cues: (i) Trigger: words that trigger mental
disturbance, (ii) LoST indicators: text indicators emphasizing low
self-esteem, and (iii) Consequences: words describing the consequences of
mental disturbance. We implement existing classifiers to examine the
attention mechanism in pre-trained language models (PLMs) for a
domain-specific, psychology-grounded task. Our findings suggest the need to
shift the focus of PLMs from Triggers and Consequences to a more
comprehensive explanation, emphasizing LoST indicators when determining low
self-esteem in Reddit posts.
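The analysis described above can be reproduced in spirit with a few lines of code: run a post through a fine-tuned PLM with attention outputs enabled, rank tokens by the attention they receive, and check whether the top tokens fall on Trigger, LoST-indicator, or Consequence spans. The sketch below is a minimal, hypothetical illustration using the Hugging Face transformers API; the checkpoint, the example post, and the choice of final-layer [CLS] attention are assumptions, not the paper's exact protocol.

```python
# Minimal sketch (not the authors' code): inspect which tokens a PLM's
# [CLS] position attends to when classifying a post. The checkpoint,
# example post, and layer choice are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # placeholder; the paper evaluates several PLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=2, output_attentions=True
)
model.eval()

post = "Ever since I lost my job I feel like I am worthless and a burden."
inputs = tokenizer(post, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# attentions: one tensor per layer, shape (batch, heads, seq_len, seq_len).
# Average the attention paid by [CLS] (position 0) over all heads of the
# final layer, then rank tokens by that weight.
cls_attention = outputs.attentions[-1][0, :, 0, :].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

for token, weight in sorted(zip(tokens, cls_attention.tolist()),
                            key=lambda tw: -tw[1])[:5]:
    print(f"{token:>12}  {weight:.3f}")
```

Tokens that rank highly here can then be compared against the dataset's span annotations; if most of the attention mass lands on Trigger or Consequence words rather than LoST indicators, that is the misplaced focus the abstract reports.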
Related papers
- Enhancing Depression Detection with Chain-of-Thought Prompting: From Emotion to Reasoning Using Large Language Models [9.43184936918456]
Depression is one of the leading causes of disability worldwide.
Recent advancements in Large Language Models have shown promise in addressing mental health challenges.
We propose a Chain-of-Thought Prompting approach that enhances both the performance and interpretability of depression detection.
arXiv Detail & Related papers (2025-02-09T12:30:57Z)
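The Chain-of-Thought entry above stages depression detection from emotion recognition to reasoning. The summary gives no prompt text, so the template below is a hypothetical sketch of that emotion-to-reasoning staging, not the authors' actual prompt.

```python
# Hypothetical chain-of-thought prompt staged from emotion cues to reasoning
# to a final verdict; an illustration of the technique, not the paper's prompt.
def build_cot_prompt(post: str) -> str:
    return (
        "You are screening social media posts for signs of depression.\n"
        f"Post: {post}\n\n"
        "Step 1: List the emotions expressed in the post.\n"
        "Step 2: Reason about whether these emotions, in context, suggest "
        "depressive symptoms.\n"
        "Step 3: Conclude with 'depressed' or 'not depressed' and a one-line "
        "justification."
    )

print(build_cot_prompt("I can't remember the last time I enjoyed anything."))
```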
- LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment [75.44934940580112]
This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment.
We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews.
Our approach, tested on 236 real-world interviews, demonstrates strong correlations with clinician assessments.
arXiv Detail & Related papers (2025-01-07T08:49:04Z)
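The LlaMADRS entry above scores transcribed clinical interviews zero-shot using carefully designed cues. As a hypothetical illustration of what such an item-scoring prompt could look like, the sketch below targets a single MADRS-style item rated on its 0-6 scale; the rubric wording is invented, not the paper's.

```python
# Hypothetical zero-shot scoring prompt for a single MADRS-style item on its
# 0-6 scale; the rubric cues are invented, not the LlaMADRS prompts.
ITEM_PROMPT = """Rate the patient's {item} from the transcript below.
Use the MADRS scale: 0 = no difficulty, 2 = mild, 4 = marked, 6 = severe.
Quote the supporting evidence, then end with 'Score: <0-6>'.

Transcript:
{transcript}"""

print(ITEM_PROMPT.format(
    item="reported sadness",
    transcript="Interviewer: How has your mood been?\n"
               "Patient: Heavy, most days. I cry without a reason.",
))
```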
- Detecting anxiety and depression in dialogues: a multi-label and explainable approach [5.635300481123079]
Anxiety and depression are the most common mental health issues worldwide, affecting a non-negligible part of the population.
In this work, an entirely novel system for the multi-label classification of anxiety and depression is proposed.
arXiv Detail & Related papers (2024-12-23T15:29:46Z)
- Decoding Linguistic Nuances in Mental Health Text Classification Using Expressive Narrative Stories [5.091061468748012]
This study bridges the gap by focusing on Expressive Narrative Stories (ENS) from individuals with and without self-declared depression.
Our research evaluates the utility of advanced language models, BERT and MentalBERT, against traditional models.
BERT exhibited minimal sensitivity to the absence of topic words in ENS, suggesting its superior capability to understand deeper linguistic features.
arXiv Detail & Related papers (2024-12-20T19:29:21Z)
- Explore the Hallucination on Low-level Perception for MLLMs [83.12180878559295]
We aim to define and evaluate the self-awareness of MLLMs in low-level visual perception and understanding tasks.
We present QL-Bench, a benchmark setting to simulate human responses to low-level vision.
We demonstrate that while some models exhibit robust low-level visual capabilities, their self-awareness remains relatively underdeveloped.
arXiv Detail & Related papers (2024-09-15T14:38:29Z)
- Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
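The concept-bottleneck entry above routes a PLM's prediction through human-readable concepts. Below is a minimal sketch of that routing in PyTorch, with invented concept names and sizes; the paper's concepts and architecture may differ.

```python
# Minimal concept-bottleneck sketch: the encoder output is first mapped to a
# small set of human-readable concept scores, and the label is predicted only
# from those scores. Concept names and sizes are invented for illustration.
import torch
import torch.nn as nn

CONCEPTS = ["sadness", "hopelessness", "self-criticism", "social withdrawal"]

class ConceptBottleneckHead(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int = 2):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_size, len(CONCEPTS))
        self.to_label = nn.Linear(len(CONCEPTS), num_labels)

    def forward(self, pooled: torch.Tensor):
        concepts = torch.sigmoid(self.to_concepts(pooled))  # interpretable scores
        logits = self.to_label(concepts)   # label depends only on the concepts
        return concepts, logits

head = ConceptBottleneckHead(hidden_size=768)
pooled = torch.randn(1, 768)  # stand-in for a PLM's pooled [CLS] embedding
concepts, logits = head(pooled)
for name, score in zip(CONCEPTS, concepts[0].tolist()):
    print(f"{name:>18}: {score:.2f}")
```

Because the label head sees only the concept scores, each prediction can be explained by pointing at which concepts fired.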
- A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires [0.2580765958706853]
We propose a novel approach that captures the semantic meanings directly from the text and compares them to symptom-related descriptions.
Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, transferable to other mental disorders, and providing interpretable detection results.
arXiv Detail & Related papers (2023-06-05T15:23:55Z)
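The questionnaire-based entry above compares text directly against symptom-related descriptions. One common way to realize that comparison is embedding similarity; the sketch below uses sentence-transformers with a placeholder checkpoint and PHQ-9-style symptom wordings, as an illustration rather than the paper's model.

```python
# Minimal sketch of symptom-description matching: embed a post and a set of
# symptom descriptions, then score cosine similarity. The checkpoint and
# symptom wordings are illustrative placeholders, not the paper's setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint

symptoms = [
    "little interest or pleasure in doing things",
    "feeling down, depressed, or hopeless",
    "trouble falling or staying asleep",
    "feeling tired or having little energy",
]
post = "Nothing sounds fun anymore and I barely sleep at night."

post_emb = model.encode(post, convert_to_tensor=True)
symptom_embs = model.encode(symptoms, convert_to_tensor=True)

# One similarity score per symptom description; higher scores suggest the
# post expresses that symptom.
scores = util.cos_sim(post_emb, symptom_embs)[0]
for desc, score in zip(symptoms, scores.tolist()):
    print(f"{score:.2f}  {desc}")
```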
- An Annotated Dataset for Explainable Interpersonal Risk Factors of Mental Disturbance in Social Media Posts [0.0]
We construct and release a new annotated dataset with human-labelled explanations and classification of Interpersonal Risk Factors (IRF) affecting mental disturbance on social media.
We establish baseline models on our dataset, facilitating future research towards real-time personalized AI models that detect patterns of TBe (Thwarted Belongingness) and PBu (Perceived Burdensomeness) in the emotional spectrum of a user's historical social media profile.
arXiv Detail & Related papers (2023-05-30T04:08:40Z)
- Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z)
- Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body-language-based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)