Short-Form Videos and Mental Health: A Knowledge-Guided Neural Topic Model
- URL: http://arxiv.org/abs/2402.10045v4
- Date: Sat, 12 Oct 2024 21:47:33 GMT
- Title: Short-Form Videos and Mental Health: A Knowledge-Guided Neural Topic Model
- Authors: Jiaheng Xie, Ruicheng Liang, Yidong Chai, Yang Liu, Daniel Zeng
- Abstract summary: We develop a Knowledge-Guided NTM to predict a short-form video's suicidal thought impact on viewers.
Our method also discovers medically relevant topics from videos that are linked to suicidal thought impact.
Our method can help platforms understand videos' suicidal thought impacts, thus moderating videos that violate their community guidelines.
- Score: 7.327234765760251
- Abstract: Along with the rise of short-form videos, their effects on viewers' mental health have had widespread consequences, prompting platforms to predict videos' impact on viewers' mental health so that they can take intervention measures in line with their community guidelines. Nevertheless, existing predictive methods do not incorporate well-established medical knowledge, which documents clinically proven external and environmental factors of mental disorders. To account for such medical knowledge, we turn to an emergent methodological discipline, seeded Neural Topic Models (NTMs). However, existing seeded NTMs suffer from single-origin topics, unknown topic sources, unclear seed supervision, and suboptimal convergence. To address those challenges, we develop a novel Knowledge-Guided NTM to predict a short-form video's suicidal thought impact on viewers. Extensive empirical analyses using TikTok and Douyin datasets show that our method outperforms state-of-the-art benchmarks. Our method also discovers medically relevant topics from videos that are linked to suicidal thought impact. We contribute to Information Systems (IS) research with a novel video analytics method that is generalizable to other video classification problems. Practically, our method can help platforms understand videos' suicidal thought impacts, thus moderating videos that violate their community guidelines.
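To make the seeded-topic idea concrete, the following is a minimal sketch of a seed-guided neural topic model with a prediction head, in the spirit of the approach the abstract describes. The seed-biasing scheme, the layer sizes, and the `SeededNTM` name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a seed-guided neural topic model (assumed design, not
# the paper's Knowledge-Guided NTM): a ProdLDA-style VAE whose topic-word
# logits are biased toward seed words drawn from medical knowledge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeededNTM(nn.Module):
    def __init__(self, vocab_size, n_topics, seed_mask, hidden=256):
        super().__init__()
        # seed_mask: (n_topics, vocab_size) binary tensor marking seed words,
        # e.g., clinically established risk-factor vocabulary.
        self.register_buffer("seed_mask", seed_mask)
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)
        self.beta = nn.Parameter(torch.randn(n_topics, vocab_size) * 0.01)
        self.clf = nn.Linear(n_topics, 1)  # suicidal-thought-impact head

    def forward(self, bow):  # bow: (batch, vocab_size) bag-of-words counts
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        theta = F.softmax(z, dim=-1)                          # topic mixture
        # Bias topic-word logits toward seed words so the seeded topics stay
        # anchored to the medical vocabulary.
        logits = self.beta + 2.0 * self.seed_mask
        recon = theta @ F.softmax(logits, dim=-1)             # doc-word probs
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        nll = -(bow * (recon + 1e-10).log()).sum(-1).mean()
        pred = torch.sigmoid(self.clf(theta)).squeeze(-1)
        return nll + kl, pred
```

Raising the logits of seed words is one simple way to anchor designated topics to a clinical vocabulary while leaving the remaining topics free to be learned from the data.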
Related papers
- Enhanced Suicidal Ideation Detection from Social Media Using a CNN-BiLSTM Hybrid Model [0.0]
A hybrid CNN-BiLSTM framework improves the identification of suicidal ideation in social media text.
To enhance the interpretability of the model's predictions, explainable AI (XAI) methods are applied.
The SHAP analysis revealed key features influencing the model's predictions, such as terms related to mental health struggles.
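As an illustration, a CNN-BiLSTM hybrid of the kind this entry describes might look as follows; the layer sizes and vocabulary size are assumptions for the sketch, not the paper's configuration.

```python
# Illustrative CNN-BiLSTM text classifier (assumed sizes, not the paper's).
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb=128, filters=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        # The 1-D convolution extracts local n-gram features ...
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        # ... and the BiLSTM models long-range context over them.
        self.lstm = nn.LSTM(filters, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        h, _ = self.lstm(x)                       # (batch, seq_len, 2*hidden)
        return torch.sigmoid(self.out(h[:, -1]))  # P(suicidal ideation)
```

Post-hoc attributions such as the SHAP analysis mentioned above could then be computed over a model like this (e.g., with the shap library), though the paper's exact XAI setup may differ.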
arXiv Detail & Related papers (2025-01-19T16:08:50Z)
- Deep Learning-Based Feature Fusion for Emotion Analysis and Suicide Risk Differentiation in Chinese Psychological Support Hotlines [18.81118590515144]
This study introduces a method that combines pitch acoustic features with deep learning-based features to analyze and understand emotions expressed during hotline interactions.
Using data from China's largest psychological support hotline, our method achieved an F1-score of 79.13% for negative binary emotion classification.
Our findings suggest that emotional fluctuation intensity and frequency could serve as novel features for psychological assessment scales and suicide risk prediction.
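A hedged sketch of the late-fusion idea follows: hand-crafted pitch statistics are concatenated with a learned speech embedding before classification. The dimensions and the fusion scheme are assumptions, not the paper's architecture.

```python
# Late fusion of pitch statistics with a deep speech embedding (assumed dims).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, pitch_dim=8, deep_dim=768, hidden=128, n_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(pitch_dim + deep_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, pitch_feats, deep_feats):
        # pitch_feats: e.g., mean/variance/range of F0 per utterance
        # deep_feats: e.g., a pooled wav2vec-style encoder embedding
        return self.fuse(torch.cat([pitch_feats, deep_feats], dim=-1))
```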
arXiv Detail & Related papers (2025-01-15T10:09:38Z)
- Supporters and Skeptics: LLM-based Analysis of Engagement with Mental Health (Mis)Information Content on Video-sharing Platforms [19.510446994785667]
One in five adults in the US lives with a mental illness.
Short-form video content has grown to serve as a crucial conduit for disseminating mental health help and resources.
arXiv Detail & Related papers (2024-07-02T20:51:06Z)
- Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges [64.63744409431001]
We present a comprehensive survey on advances in adversarial attacks and defenses for medical image analysis.
For a fair comparison, we establish a new benchmark for adversarially robust medical diagnosis models.
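For context, the fast gradient sign method (FGSM) is one canonical attack that surveys in this area cover; the sketch below applies it to a placeholder classifier and is not tied to this paper's benchmark.

```python
# FGSM: perturb each pixel by eps in the direction that increases the loss.
# The model argument is a placeholder for any medical image classifier.
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, eps=0.01):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()
```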
arXiv Detail & Related papers (2023-03-24T16:38:58Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
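Relative pleasantness labels of this kind are typically exploited with a pairwise ranking objective; the sketch below shows one standard formulation with a margin ranking loss, where the `scorer` model is an assumed placeholder.

```python
# Pairwise ranking from relative pleasantness labels (standard formulation,
# not this paper's training setup).
import torch
import torch.nn as nn

rank_loss = nn.MarginRankingLoss(margin=0.1)

def ranking_step(scorer, video_a, video_b, a_more_pleasant):
    # a_more_pleasant: tensor of +1 where video_a was judged more pleasant,
    # -1 otherwise; the scorer maps a video tensor to a scalar wellbeing score.
    s_a, s_b = scorer(video_a), scorer(video_b)
    return rank_loss(s_a, s_b, a_more_pleasant)
```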
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- An ensemble deep learning technique for detecting suicidal ideation from posts in social media platforms [0.0]
This paper proposes an LSTM-Attention-CNN combined model to analyze social media submissions and detect suicidal intentions.
The proposed model achieved an accuracy of 90.3% and an F1-score of 92.6%.
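One plausible reading of an LSTM-Attention-CNN combination is sketched below: CNN n-gram features feed an LSTM, and an attention layer pools the hidden states. The wiring and sizes are illustrative assumptions rather than the paper's ensemble.

```python
# Assumed LSTM-Attention-CNN wiring: CNN features -> LSTM -> attention pooling.
import torch
import torch.nn as nn

class LSTMAttnCNN(nn.Module):
    def __init__(self, vocab_size=20000, emb=128, filters=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(filters, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # additive attention scores
        self.out = nn.Linear(hidden, 1)

    def forward(self, ids):                                   # (batch, seq)
        x = torch.relu(self.conv(self.emb(ids).transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(x)                                   # (batch, seq, hidden)
        w = torch.softmax(self.attn(h), dim=1)                # weights over timesteps
        ctx = (w * h).sum(dim=1)                              # attention pooling
        return torch.sigmoid(self.out(ctx))                   # P(suicidal intent)
```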
arXiv Detail & Related papers (2021-12-17T15:34:03Z)
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions (direct, indirect, screening-off, and backward-blocking) in either an independent or an interventional scenario.
We find that pure neural models perform at chance level, falling back on an associative strategy, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z)
- Understanding Health Misinformation Transmission: An Interpretable Deep Learning Approach to Manage Infodemics [6.08461198240039]
This study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD) to predict health misinformation transmission in social media.
We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos.
Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmissions, and manage infodemics.
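As a loose illustration of the "piecewise wide and attention deep" idea, the sketch below combines a linear (wide) path with a deep path and exposes per-feature attention weights for interpretability. It omits the GAN-based component and is an assumption-laden sketch, not GAN-PiWAD itself.

```python
# Wide-and-deep predictor with interpretable feature attention (assumed
# design; not GAN-PiWAD, whose GAN component is omitted here).
import torch
import torch.nn as nn

class WideDeepAttn(nn.Module):
    def __init__(self, n_feats=32, hidden=64):
        super().__init__()
        self.wide = nn.Linear(n_feats, 1)                      # linear path
        self.deep = nn.Sequential(nn.Linear(n_feats, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))        # deep path
        self.attn = nn.Linear(n_feats, n_feats)                # feature weights

    def forward(self, x):                      # x: (batch, n_feats)
        w = torch.softmax(self.attn(x), dim=-1)  # interpretable importances
        xw = w * x
        return torch.sigmoid(self.wide(xw) + self.deep(xw)), w
```

Returning the attention weights alongside the prediction is one simple way to surface which features drive a given transmission forecast.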
arXiv Detail & Related papers (2020-12-21T15:49:19Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
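The sketch below is loosely inspired by the causal intuition described here: gradient saliency selects pixels, a synthetic intervention masks them, and the training signal discourages the model from predicting the original label from the remaining, non-causal evidence. The masking rule and loss weighting are assumptions, not PPI's actual formulation.

```python
# Saliency-guided pseudo-intervention step (assumed formulation, inspired by
# the causal intuition; not the PPI algorithm itself).
import torch
import torch.nn.functional as F

def pseudo_intervention_loss(model, images, labels, tau=0.5):
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    sal = images.grad.abs().mean(dim=1, keepdim=True)          # saliency map
    # Mask pixels whose saliency exceeds tau times the per-image maximum.
    mask = (sal > tau * sal.amax(dim=(2, 3), keepdim=True)).float()
    intervened = images.detach() * (1 - mask)                  # remove salient pixels
    # After the intervention, the model should NOT recover the original label
    # from the remaining evidence, so we reward a high cross-entropy here.
    return -F.cross_entropy(model(intervened), labels)
```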
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)