The Potential of Using Vision Videos for CrowdRE: Video Comments as a
Source of Feedback
- URL: http://arxiv.org/abs/2108.02076v1
- Date: Wed, 4 Aug 2021 14:18:27 GMT
- Title: The Potential of Using Vision Videos for CrowdRE: Video Comments as a
Source of Feedback
- Authors: Oliver Karras, Eklekta Kristo, Jil Klünder
- Abstract summary: We analyze and assess the potential of using vision videos for CrowdRE.
In a case study, we analyzed 4505 comments on a vision video from YouTube.
We conclude that the use of vision videos for CrowdRE has large potential.
- Score: 0.8594140167290097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision videos are established for soliciting feedback and stimulating
discussions in requirements engineering (RE) practices, such as focus groups.
Several researchers have motivated transferring these benefits to crowd-based
RE (CrowdRE) by using vision videos on social media platforms. So far, however,
little research has explored the potential of using vision videos for CrowdRE in
detail. In this paper, we analyze and assess this potential, in particular,
focusing on video comments as a source of feedback. In a case study, we
analyzed 4505 comments on a vision video from YouTube. We found that the video
solicited 2770 comments from 2660 viewers in four days. This is more than 50%
of all comments the video received in four years. Even though only a fraction
of these comments is relevant to RE, the relevant comments address typical
intentions and topics of user feedback, such as feature requests and problem
reports. Beyond these typical user feedback categories, we found more than
300 comments that address the topic of safety, which had not appeared in
previous analyses of user feedback. In an automated analysis, we compared the
performance of three machine learning algorithms at classifying the video
comments. Despite certain differences, the algorithms classified the video
comments well. Based on these findings, we conclude that the use of vision
videos for CrowdRE has large potential. Despite the preliminary nature of the
case study, we are optimistic that vision videos can motivate stakeholders to
participate actively in a crowd and to solicit numerous video comments as a
valuable source of feedback.
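The abstract does not name the three algorithms or the feature representation, so the following is only a minimal sketch of such a comment-classification comparison, assuming TF-IDF features and three common text classifiers; the example comments and labels mirror the feedback categories mentioned above but are invented for illustration.

```python
# Minimal sketch of comparing text classifiers on labeled video comments.
# The abstract does not name the paper's three algorithms; Naive Bayes,
# a linear SVM, and a random forest are assumed as common baselines.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented example comments and labels mirroring the categories above.
comments = [
    "Please add a night mode to the app",
    "It would be great if this worked offline",
    "The player crashes every time I pause",
    "Login fails after the latest update",
    "This could seriously hurt someone if it fails",
    "What happens to pedestrians when the sensors ice over?",
]
labels = [
    "feature request", "feature request",
    "problem report", "problem report",
    "safety", "safety",
]

for clf in (MultinomialNB(), LinearSVC(), RandomForestClassifier(random_state=0)):
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    # 2-fold cross-validation; macro F1 weights all categories equally.
    scores = cross_val_score(pipeline, comments, labels, cv=2, scoring="f1_macro")
    print(f"{type(clf).__name__}: mean macro F1 = {scores.mean():.2f}")
```

Macro-averaged F1 is used here because RE-relevant categories such as safety are typically rare among the comments, and a macro average keeps minority classes from being drowned out by the majority class.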
Related papers
- HOTVCOM: Generating Buzzworthy Comments for Videos [49.39846630199698]
This study introduces HotVCom, the largest Chinese video hot-comment dataset, comprising 94k diverse videos and 137 million comments.
We also present the ComHeat framework, which synergistically integrates visual, auditory, and textual data to generate influential hot-comments on the Chinese video dataset.
arXiv Detail & Related papers (2024-09-23T16:45:13Z)
- Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline [30.379212611361893]
Existing video multi-modal sentiment analysis mainly focuses on the sentiment expression of people within the video, yet often neglects the induced sentiment of viewers while watching the videos.
We propose Multi-modal Sentiment Analysis for Comment Response of Video Induced (MSA-CRVI) to infer opinions and emotions from comment responses to micro videos.
To our knowledge, it is the largest video multi-modal sentiment dataset in terms of scale and video duration, containing 107,267 comments and 8,210 micro videos with a total video duration of 68.83 hours.
arXiv Detail & Related papers (2024-05-15T10:24:54Z)
- ViCo: Engaging Video Comment Generation with Human Preference Rewards [68.50351391812723]
We propose ViCo with three novel designs to tackle the challenges of generating engaging video comments.
To quantify the engagement of comments, we utilize the number of "likes" each comment receives as a proxy for human preference.
To automatically evaluate the engagement of comments, we train a reward model to align its judgments with this proxy.
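As a rough illustration of this likes-as-preference idea, here is a minimal pairwise reward-model sketch: for two comments on the same video, the model is trained so that the comment with more likes receives the higher score. The embedding features, model size, and training data below are all assumptions for illustration, not ViCo's actual design.

```python
# Toy sketch: pairwise reward model using "likes" as a preference proxy.
# Everything here (features, model size, data) is illustrative; ViCo's
# actual architecture is not described in this summary.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # scalar engagement score per comment

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake comment embeddings: in each pair, `preferred` has more likes.
preferred = torch.randn(128, 32)
rejected = torch.randn(128, 32)

for _ in range(100):
    # Bradley-Terry-style loss: push score(preferred) above score(rejected).
    loss = -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Ranking pairs of comments on the same video, rather than regressing raw like counts, sidesteps the fact that like counts are not comparable across videos with different audience sizes.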
arXiv Detail & Related papers (2023-08-22T04:01:01Z)
- FunQA: Towards Surprising Video Comprehension [64.58663825184958]
We introduce FunQA, a challenging video question-answering dataset.
FunQA covers three previously unexplored types of surprising videos: HumorQA, CreativeQA, and MagicQA.
In total, the FunQA benchmark consists of 312K free-text QA pairs derived from 4.3K video clips.
arXiv Detail & Related papers (2023-06-26T17:59:55Z)
- A Data-Driven Approach for Finding Requirements Relevant Feedback from TikTok and YouTube [37.87427796354001]
This study delves into the potential of TikTok and YouTube, two widely used social media platforms that focus on video content.
We evaluated the prospect of videos as a source of user feedback by analyzing audio and visual text, and metadata (i.e., description/title) from 6276 videos of 20 popular products across various industries.
We found that product ratings (feature, design, performance), bug reports, and usage tutorials are persistent themes in the videos.
arXiv Detail & Related papers (2023-05-02T21:47:06Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- YouTube Ad View Sentiment Analysis using Deep Learning and Machine Learning [0.0]
This research predicts YouTube ad view sentiments using deep learning and machine learning algorithms such as Linear Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and Artificial Neural Network (ANN).
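As a loose illustration of such an algorithm comparison, the sketch below benchmarks the five named model families on a synthetic sentiment-score regression task with scikit-learn; the features and targets are stand-ins, since the summary does not describe the paper's actual data or preprocessing.

```python
# Rough sketch of benchmarking the algorithm families named above on a
# sentiment-score regression task; the features and targets below are
# synthetic stand-ins, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # e.g., ad engagement/metadata features
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)  # sentiment score

models = {
    "LR": LinearRegression(),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(random_state=0),
    "RF": RandomForestRegressor(random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated R^2 as a simple common yardstick.
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.2f}")
}
```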
arXiv Detail & Related papers (2022-05-23T06:55:34Z)
- Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z)
- Classifying YouTube Comments Based on Sentiment and Type of Sentence [0.0]
We address the challenge of text extraction and classification from YouTube comments using well-known statistical measures and machine learning models.
The results show that our approach, which incorporates conventional methods, performs well on the classification task, validating its potential to assist content creators in increasing viewer engagement on their channels.
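The summary names the two classification axes (sentiment and sentence type) but not the concrete methods, so the following is only an illustrative sketch using NLTK's VADER sentiment scorer and a crude punctuation heuristic for sentence type; neither is claimed to be the paper's approach.

```python
# Illustrative sketch of the two-axis idea above: score each comment's
# sentiment and roughly tag its sentence type. VADER and the heuristics
# below are stand-ins, not the paper's actual pipeline.
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

def sentence_type(comment: str) -> str:
    """Crude heuristic: question, exclamation, or statement."""
    stripped = comment.strip()
    if stripped.endswith("?"):
        return "interrogative"
    if stripped.endswith("!"):
        return "exclamatory"
    return "declarative"

sia = SentimentIntensityAnalyzer()
for comment in ["Love this channel!", "Why is the audio so quiet?", "The intro is too long."]:
    polarity = sia.polarity_scores(comment)["compound"]  # -1 (negative) .. +1 (positive)
    label = "positive" if polarity > 0.05 else "negative" if polarity < -0.05 else "neutral"
    print(f"{label:8s} {sentence_type(comment):13s} {comment}")
```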
arXiv Detail & Related papers (2021-10-31T18:08:10Z)
- Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences across countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)