Video-Mediated Emotion Disclosure: Expressions of Fear, Sadness, and Joy by People with Schizophrenia on YouTube
- URL: http://arxiv.org/abs/2506.10932v2
- Date: Wed, 18 Jun 2025 22:19:28 GMT
- Title: Video-Mediated Emotion Disclosure: Expressions of Fear, Sadness, and Joy by People with Schizophrenia on YouTube
- Authors: Jiaying Lizzy Liu, Yan Zhang,
- Abstract summary: We analyzed 200 YouTube videos created by individuals with schizophrenia. Our analysis revealed diverse practices of emotion disclosure through both verbal and visual channels. We found that the deliberate construction of visual elements, including environmental settings, appears to foster more supportive and engaged viewer responses.
- Score: 2.767257448554864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individuals with schizophrenia frequently experience intense emotions and often turn to vlogging as a medium for emotional expression. While previous research has predominantly focused on text-based disclosure, little is known about how individuals construct narratives around emotions and emotional experiences in video blogs. Our study addresses this gap by analyzing 200 YouTube videos created by individuals with schizophrenia. Drawing on media research and self-presentation theories, we developed a visual analysis framework to disentangle these videos. Our analysis revealed diverse practices of emotion disclosure through both verbal and visual channels, highlighting the dynamic interplay between these modes of expression. We found that the deliberate construction of visual elements, including environmental settings and specific aesthetic choices, appears to foster more supportive and engaged viewer responses. These findings underscore the need for future large-scale quantitative research examining how visual features shape video-mediated communication on social media platforms. Such investigations would inform the development of care-centered video-sharing platforms that better support individuals managing illness experiences.
Related papers
- Saliency-guided Emotion Modeling: Predicting Viewer Reactions from Video Stimuli [0.0]
We introduce a novel saliency-based approach to emotion prediction by extracting two key features: saliency area and number of salient regions.
Using the HD2S saliency model and OpenFace facial action unit analysis, we examine the relationship between video saliency and viewer emotions.
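The two features named above, saliency area and number of salient regions, can be derived from a 2D saliency map in a straightforward way. The sketch below is an illustrative assumption, not the paper's code: the 0.5 threshold and 4-connectivity are choices of this sketch. It thresholds the map, measures the fraction of salient pixels, and counts connected salient regions with a BFS flood fill.

```python
from collections import deque

def saliency_features(saliency_map, threshold=0.5):
    """Illustrative features from a 2D saliency map (list of lists of
    values in [0, 1]): the fraction of pixels above `threshold`
    (saliency area) and the number of 4-connected salient regions.
    Threshold and connectivity are assumptions of this sketch."""
    h, w = len(saliency_map), len(saliency_map[0])
    salient = [[v >= threshold for v in row] for row in saliency_map]
    area = sum(v for row in salient for v in row) / (h * w)
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for i in range(h):
        for j in range(w):
            if salient[i][j] and not seen[i][j]:
                regions += 1  # new connected component found
                seen[i][j] = True
                q = deque([(i, j)])
                while q:  # BFS flood fill over 4-neighbours
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and salient[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return area, regions
```

For example, a 3x3 map with three isolated above-threshold pixels yields an area fraction of 1/3 and a region count of 3.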
arXiv Detail & Related papers (2025-05-25T14:52:36Z)
- Displaying Fear, Sadness, and Joy in Public: Schizophrenia Vloggers' Video Narration of Emotion and Online Care-Seeking [8.640838598568605]
Individuals with severe mental illnesses (SMI) increasingly turn to vlogging as an authentic medium for emotional disclosure and online support-seeking.
Our study analyzed 401 YouTube videos created by schizophrenia vloggers, revealing that vloggers disclosed their fear, sadness, and joy through verbal narration by explicit expressions or storytelling.
Notably, we uncovered a concerning 'visual appeal disparity' in audience engagement, with visually appealing videos receiving significantly more views, likes, and comments.
arXiv Detail & Related papers (2025-02-28T02:23:27Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- Affection: Learning Affective Explanations for Real-World Visual Data [50.28825017427716]
We introduce and share with the research community a large-scale dataset that contains emotional reactions and free-form textual explanations for 85,007 publicly available images.
We show that there is significant common ground to capture potentially plausible emotional responses with a large support in the subject population.
Our work paves the way for richer, more human-centric, and emotionally-aware image analysis systems.
arXiv Detail & Related papers (2022-10-04T22:44:17Z)
- Predicting emotion from music videos: exploring the relative contribution of visual and auditory information to affective responses [0.0]
We present MusicVideos (MuVi), a novel dataset for affective multimedia content analysis.
The data were collected by presenting music videos to participants in three conditions: music, visual, and audiovisual.
arXiv Detail & Related papers (2022-02-19T07:36:43Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Affective Image Content Analysis: Two Decades Review and New Perspectives [132.889649256384]
We will comprehensively review the development of affective image content analysis (AICA) over the past two decades.
We will focus on the state-of-the-art methods with respect to three main challenges -- the affective gap, perception subjectivity, and label noise and absence.
We discuss some challenges and promising research directions in the future, such as image content and context understanding, group emotion clustering, and viewer-image interaction.
arXiv Detail & Related papers (2021-06-30T15:20:56Z)
- Audio-Driven Emotional Video Portraits [79.95687903497354]
We present Emotional Video Portraits (EVP), a system for synthesizing high-quality video portraits with vivid emotional dynamics driven by audios.
Specifically, we propose the Cross-Reconstructed Emotion Disentanglement technique to decompose speech into two decoupled spaces.
With the disentangled features, dynamic 2D emotional facial landmarks can be deduced.
Then we propose the Target-Adaptive Face Synthesis technique to generate the final high-quality video portraits.
arXiv Detail & Related papers (2021-04-15T13:37:13Z)
- Unboxing Engagement in YouTube Influencer Videos: An Attention-Based Approach [0.3686808512438362]
"What is said" through words (text) is more important than "how it is said" through imagery (video images) or acoustics (audio) in predicting video engagement.
We analyze unstructured data from long-form YouTube influencer videos.
arXiv Detail & Related papers (2020-12-22T19:32:52Z)
- A Blast From the Past: Personalizing Predictions of Video-Induced Emotions using Personal Memories as Context [5.1314912554605066]
We show that automatic analysis of text describing viewers' video-triggered memories can account for variation in their emotional responses.
We discuss the relevance of these findings for improving on state-of-the-art approaches to automated affective video analysis in personalized contexts.
arXiv Detail & Related papers (2020-08-27T13:06:10Z)
- "Notic My Speech" -- Blending Speech Patterns With Multimedia [65.91370924641862]
We propose a view-temporal attention mechanism to model both the view dependence and the visemic importance in speech recognition and understanding.
Our proposed method outperformed the existing work by 4.99% in terms of the viseme error rate.
We show that there is a strong correlation between our model's understanding of multi-view speech and human perception.
arXiv Detail & Related papers (2020-06-12T06:51:55Z)
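The viseme error rate reported above is, by analogy with word error rate, commonly defined as the Levenshtein edit distance between a hypothesis viseme sequence and a reference sequence, divided by the reference length. The sketch below assumes that standard definition; the paper's exact metric may differ.

```python
def viseme_error_rate(reference, hypothesis):
    """Edit-distance-based error rate over viseme sequences,
    analogous to word error rate: (S + D + I) / len(reference).
    Assumes the standard Levenshtein definition."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = min edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # i deletions
    for j in range(n + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution/match
    return dp[m][n] / m
```

With one substituted viseme in a three-viseme reference, the rate is 1/3; an exact match gives 0.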
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.