Reliability Analysis of Psychological Concept Extraction and
Classification in User-penned Text
- URL: http://arxiv.org/abs/2401.06709v1
- Date: Fri, 12 Jan 2024 17:19:14 GMT
- Title: Reliability Analysis of Psychological Concept Extraction and
Classification in User-penned Text
- Authors: Muskan Garg, MSVPJ Sathvik, Amrit Chadha, Shaina Raza, Sunghwan Sohn
- Abstract summary: We use the LoST dataset to capture nuanced textual cues that suggest the presence of low self-esteem in the posts of Reddit users.
Our findings suggest the need to shift the focus of PLMs from Triggers and Consequences to a more comprehensive explanation.
- Score: 9.26840677406494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The social NLP research community has witnessed a recent surge in
computational advancements for mental health analysis, building responsible AI
models for the complex interplay between language use and self-perception. Such
responsible AI models aid in quantifying psychological concepts from
user-penned texts on social media. Thinking beyond the low-level
(classification) task, we advance the existing binary classification dataset
towards the higher-level task of reliability analysis through the lens of
explanations, posing it as one of the safety measures. We annotate the LoST
dataset to capture nuanced textual cues that suggest the presence of low
self-esteem in the posts of Reddit users. We further observe that NLP models
developed for determining the presence of low self-esteem focus on three types
of textual cues: (i) Trigger: words that trigger mental disturbance, (ii) LoST
indicators: textual indicators emphasizing low self-esteem, and (iii)
Consequences: words describing the consequences of mental disturbance. We
implement existing classifiers to examine the attention mechanism in
pre-trained language models (PLMs) for a domain-specific, psychology-grounded
task. Our findings suggest the need to shift the focus of PLMs from Triggers
and Consequences to a more comprehensive explanation, emphasizing LoST
indicators when determining low self-esteem in Reddit posts.
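The analysis described above aggregates a PLM's attention mass over annotated cue spans (Trigger, LoST indicator, Consequence) to see which cue type the model attends to. The sketch below is a minimal, self-contained illustration of that idea, not the authors' code: it computes scaled dot-product attention over random token embeddings and sums the attention each hypothetical cue span receives. The token indices in `cue_spans` are made up for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K):
    """Return an (n_queries, n_keys) matrix of softmax attention weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

def cue_attention_mass(weights, cue_spans):
    """Average attention received by each token, summed per cue span."""
    per_token = weights.mean(axis=0)  # attention mass each token receives
    return {name: float(per_token[idx].sum()) for name, idx in cue_spans.items()}

# Toy sequence of 8 tokens with random embeddings (stand-ins for PLM states).
rng = np.random.default_rng(0)
n_tokens, d = 8, 16
Q = rng.normal(size=(n_tokens, d))
K = rng.normal(size=(n_tokens, d))
W = scaled_dot_product_attention(Q, K)

# Hypothetical cue-span annotations (token indices are illustrative only).
cue_spans = {"Trigger": [1, 2], "LoST": [4], "Consequence": [6, 7]}
mass = cue_attention_mass(W, cue_spans)
```

In practice one would take `W` from a fine-tuned PLM (e.g. per-head attention matrices) and the spans from the LoST annotations; comparing the resulting masses across cue types mirrors the paper's claim that attention skews toward Triggers and Consequences rather than LoST indicators.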
Related papers
- Explore the Hallucination on Low-level Perception for MLLMs [83.12180878559295]
We aim to define and evaluate the self-awareness of MLLMs in low-level visual perception and understanding tasks.
We present QL-Bench, a benchmark setting to simulate human responses to low-level vision.
We demonstrate that while some models exhibit robust low-level visual capabilities, their self-awareness remains relatively underdeveloped.
arXiv Detail & Related papers (2024-09-15T14:38:29Z) - Exploring the Task-agnostic Trait of Self-supervised Learning in the Context of Detecting Mental Disorders [2.314534951601432]
Self-supervised learning (SSL) has been investigated to generate task-agnostic representations across various domains.
This study employs SSL models trained by predicting multiple fixed targets or masked frames.
We propose a list of fixed targets to make the generated representation more efficient for detecting MDD and PTSD.
arXiv Detail & Related papers (2024-03-22T12:46:58Z) - LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced
Personality Detection Model [58.887561071010985]
Personality detection aims to detect one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - Tuning-Free Accountable Intervention for LLM Deployment -- A
Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z) - Interpreting Context Look-ups in Transformers: Investigating Attention-MLP Interactions [19.33740818235595]
This study investigates how attention heads and next-token neurons interact in large language models (LLMs) to predict new words.
Our findings reveal that some attention heads recognize specific contexts and activate a token-predicting neuron accordingly.
arXiv Detail & Related papers (2024-02-23T02:15:47Z) - Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z) - Explainable Depression Symptom Detection in Social Media [2.677715367737641]
We propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms.
arXiv Detail & Related papers (2023-10-20T17:05:27Z) - A Simple and Flexible Modeling for Mental Disorder Detection by Learning
from Clinical Questionnaires [0.2580765958706853]
We propose a novel approach that captures the semantic meanings directly from the text and compares them to symptom-related descriptions.
Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, transferable to other mental disorders, and providing interpretable detection results.
arXiv Detail & Related papers (2023-06-05T15:23:55Z) - An Annotated Dataset for Explainable Interpersonal Risk Factors of
Mental Disturbance in Social Media Posts [0.0]
We construct and release a new annotated dataset with human-labelled explanations and classification of Interpersonal Risk Factors (IRF) affecting mental disturbance on social media.
We establish baseline models on our dataset, facilitating future research directions to develop real-time personalized AI models by detecting patterns of Thwarted Belongingness (TBe) and Perceived Burdensomeness (PBu) in the emotional spectrum of a user's historical social media profile.
arXiv Detail & Related papers (2023-05-30T04:08:40Z) - Leveraging Pretrained Representations with Task-related Keywords for
Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z) - Pose-based Body Language Recognition for Emotion and Psychiatric Symptom
Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.