A Data-Driven Approach for Finding Requirements Relevant Feedback from
TikTok and YouTube
- URL: http://arxiv.org/abs/2305.01796v4
- Date: Mon, 24 Jul 2023 21:35:19 GMT
- Title: A Data-Driven Approach for Finding Requirements Relevant Feedback from
TikTok and YouTube
- Authors: Manish Sihag, Ze Shi Li, Amanda Dash, Nowshin Nawar Arony, Kezia
Devathasan, Neil Ernst, Alexandra Albu, Daniela Damian
- Abstract summary: This study delves into the potential of TikTok and YouTube, two widely used social media platforms that focus on video content.
We evaluated the prospect of videos as a source of user feedback by analyzing audio and visual text, and metadata (i.e., description/title) from 6276 videos of 20 popular products across various industries.
We found that product ratings (feature, design, performance), bug reports, and usage tutorials are persistent themes in the videos.
- Score: 37.87427796354001
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The increasing importance of videos as a medium for engagement,
communication, and content creation makes them critical for organizations to
consider for user feedback. However, sifting through vast amounts of video
content on social media platforms to extract requirements-relevant feedback is
challenging. This study delves into the potential of TikTok and YouTube, two
widely used social media platforms that focus on video content, in identifying
relevant user feedback that may be further refined into requirements using
subsequent requirement generation steps. We evaluated the prospect of videos as
a source of user feedback by analyzing audio and visual text, and metadata
(i.e., description/title) from 6276 videos of 20 popular products across
various industries. We employed state-of-the-art transformer-based deep
learning models and classified 3097 videos as containing requirements-relevant
information. We then clustered the relevant videos and found multiple
requirements-relevant feedback themes for each of the 20 products. This
feedback can later be refined into requirements artifacts. We found that
product ratings (feature, design, performance), bug reports, and usage
tutorials are persistent themes in the videos. Video-based social media such as TikTok
and YouTube can provide valuable user insights, making them a powerful and
novel resource for companies to improve customer-centric development.
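
To make the pipeline concrete, here is a minimal sketch in Python of the classify-then-cluster workflow the abstract describes, assuming Hugging Face transformers, sentence-transformers, and scikit-learn. The zero-shot classifier, embedding model, labels, and cluster count are illustrative stand-ins, not the authors' exact configuration.

```python
# Minimal sketch of the classify-then-cluster pipeline described above:
# combine each video's audio transcript, on-screen text, and metadata,
# classify the text as requirements-relevant, then cluster the relevant
# videos into feedback themes. The models and labels here are
# illustrative assumptions, not the paper's exact configuration.
from transformers import pipeline
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Each record carries the three text channels analyzed in the study.
videos = [
    {"audio": "the battery drains overnight even in standby",
     "visual": "BATTERY BUG?", "meta": "Honest review after 3 months"},
    {"audio": "here is how you pair the watch with your phone",
     "visual": "step 1: open settings", "meta": "Setup tutorial"},
    {"audio": "unboxing my merch haul, like and subscribe",
     "visual": "", "meta": "Vlog #12"},
]

# 1) Classification: a zero-shot classifier stands in for the fine-tuned
#    transformer classifier used in the paper.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
labels = ["requirements-relevant feedback", "not relevant"]

relevant_texts = []
for v in videos:
    text = " ".join(filter(None, (v["meta"], v["audio"], v["visual"])))
    result = classifier(text, candidate_labels=labels)
    if result["labels"][0] == "requirements-relevant feedback":
        relevant_texts.append(text)

# 2) Clustering: embed the relevant videos and group them into themes
#    (e.g., bug reports, feature ratings, usage tutorials).
if len(relevant_texts) >= 2:
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(relevant_texts)
    n_clusters = min(3, len(relevant_texts))  # illustrative choice
    themes = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(embeddings)
    for text, theme in zip(relevant_texts, themes):
        print(f"theme {theme}: {text[:60]}")
```

In practice, the audio channel would come from a speech-to-text step and the visual text from OCR over sampled frames, upstream of this snippet.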
Related papers
- HOTVCOM: Generating Buzzworthy Comments for Videos [49.39846630199698]
This study introduces HotVCom, the largest Chinese video hot-comment dataset, comprising 94k diverse videos and 137 million comments.
We also present the ComHeat framework, which synergistically integrates visual, auditory, and textual data to generate influential hot-comments on the Chinese video dataset.
arXiv Detail & Related papers (2024-09-23T16:45:13Z)
- Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video MLLMs [20.168429351519055]
Video understanding is a crucial next step for multimodal large language models (MLLMs).
We propose VideoNIAH (Video Needle In A Haystack), a benchmark construction framework through synthetic video generation.
We conduct a comprehensive evaluation of both proprietary and open-source models, uncovering significant differences in their video understanding capabilities.
arXiv Detail & Related papers (2024-06-13T17:50:05Z)
- Detours for Navigating Instructional Videos [58.1645668396789]
We propose VidDetours, a video-language approach that learns to retrieve the targeted temporal segments from a large repository of how-to's.
We show our model's significant improvements over best available methods for video retrieval and question answering, with recall rates exceeding the state of the art by 35%.
arXiv Detail & Related papers (2024-01-03T16:38:56Z)
- Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models [81.84810348214113]
Video-based large language models (Video-LLMs) have been recently introduced, targeting both fundamental improvements in perception and comprehension, and a diverse range of user inquiries.
To guide the development of such a model, the establishment of a robust and comprehensive evaluation system becomes crucial.
This paper proposes Video-Bench, a new comprehensive benchmark along with a toolkit specifically designed for evaluating Video-LLMs.
arXiv Detail & Related papers (2023-11-27T18:59:58Z)
- VTC: Improving Video-Text Retrieval with User Comments [22.193221760244707]
This paper introduces a new dataset of videos, titles and comments.
By using comments, our method is able to learn better, more contextualised representations for image, video, and audio.
arXiv Detail & Related papers (2022-10-19T18:11:39Z)
- CLUE: Contextualised Unified Explainable Learning of User Engagement in Video Lectures [6.25256391074865]
We propose a new unified model, CLUE, which learns from the features extracted from public online teaching videos.
Our model exploits various multi-modal features to model the complexity of language, context information, and textual emotion of the delivered content.
arXiv Detail & Related papers (2022-01-14T19:51:06Z)
- APES: Audiovisual Person Search in Untrimmed Video [87.4124877066541]
We present the Audiovisual Person Search dataset (APES).
APES contains over 1.9K identities labeled along 36 hours of video.
A key property of APES is that it includes dense temporal annotations that link faces to speech segments of the same identity.
arXiv Detail & Related papers (2021-06-03T08:16:42Z)
- Comprehensive Information Integration Modeling Framework for Video Titling [124.11296128308396]
We integrate comprehensive sources of information, including the content of consumer-generated videos, the narrative comment sentences supplied by consumers, and the product attributes, in an end-to-end modeling framework.
The proposed method consists of two processes: granular-level interaction modeling and abstraction-level story-line summarization.
We collect a large-scale dataset accordingly from real-world data in Taobao, a world-leading e-commerce platform.
arXiv Detail & Related papers (2020-06-24T10:38:15Z)