Viblio: Introducing Credibility Signals and Citations to Video-Sharing Platforms
- URL: http://arxiv.org/abs/2402.17218v1
- Date: Tue, 27 Feb 2024 05:21:39 GMT
- Title: Viblio: Introducing Credibility Signals and Citations to Video-Sharing Platforms
- Authors: Emelia Hughes, Renee Wang, Prerna Juneja, Tony Li, Tanu Mitra, Amy Zhang
- Abstract summary: Viblio is a prototype system, designed around participants' needs, that enables YouTube users to view and add citations while watching a video.
From an evaluation with 12 people, all participants found Viblio to be intuitive and useful in the process of evaluating a video's credibility.
- Score: 8.832571289776256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As more users turn to video-sharing platforms like YouTube as an information
source, they may consume misinformation despite their best efforts. In this
work, we investigate ways that users can better assess the credibility of
videos by first exploring how users currently determine credibility using
existing signals on platforms and then by introducing and evaluating new
credibility-based signals. We conducted 12 contextual inquiry interviews with
YouTube users, determining that participants used a combination of existing
signals, such as the channel name, the production quality, and prior knowledge,
to evaluate credibility, yet sometimes stumbled in their efforts to do so. We
then developed Viblio, a prototype system informed by our participants' needs, which enables YouTube users to view and add citations and related information while watching a video. From an evaluation with 12 people, all participants found
Viblio to be intuitive and useful in the process of evaluating a video's
credibility and could see themselves using Viblio in the future.
Related papers
- Personalized Video Summarization by Multimodal Video Understanding [2.1372652192505703]
We present a pipeline called Video Summarization with Language (VSL) for user-preferred video summarization.
VSL builds on pre-trained visual language models (VLMs), avoiding the need to train a video summarization system on a large training dataset.
We show that our method is more adaptable across different datasets compared to supervised query-based video summarization models.
arXiv Detail & Related papers (2024-11-05T22:14:35Z)
- HOTVCOM: Generating Buzzworthy Comments for Videos [49.39846630199698]
This study introduces HotVCom, the largest Chinese video hot-comment dataset, comprising 94k diverse videos and 137 million comments.
We also present the ComHeat framework, which synergistically integrates visual, auditory, and textual data to generate influential hot-comments on the Chinese video dataset.
arXiv Detail & Related papers (2024-09-23T16:45:13Z)
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
Current methods for skill assessment from video only provide scores or compare demonstrations.
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
- Detours for Navigating Instructional Videos [58.1645668396789]
We propose VidDetours, a video-language approach that learns to retrieve the targeted temporal segments from a large repository of how-to videos.
We show our model's significant improvements over best available methods for video retrieval and question answering, with recall rates exceeding the state of the art by 35%.
arXiv Detail & Related papers (2024-01-03T16:38:56Z)
- Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models [81.84810348214113]
Video-based large language models (Video-LLMs) have been recently introduced, targeting both fundamental improvements in perception and comprehension, and a diverse range of user inquiries.
To guide the development of such models, a robust and comprehensive evaluation system is crucial.
This paper proposes Video-Bench, a new comprehensive benchmark along with a toolkit specifically designed for evaluating Video-LLMs.
arXiv Detail & Related papers (2023-11-27T18:59:58Z)
- VideoChat: Chat-Centric Video Understanding [80.63932941216129]
We develop an end-to-end chat-centric video understanding system called VideoChat.
It integrates video foundation models and large language models via a learnable neural interface.
Preliminary qualitative experiments demonstrate the potential of our system across a broad spectrum of video applications.
arXiv Detail & Related papers (2023-05-10T17:59:04Z)
- A Data-Driven Approach for Finding Requirements Relevant Feedback from TikTok and YouTube [37.87427796354001]
This study delves into the potential of TikTok and YouTube, two widely used social media platforms that focus on video content.
We evaluated the prospect of videos as a source of user feedback by analyzing spoken and on-screen text, as well as metadata (i.e., description/title), from 6276 videos of 20 popular products across various industries.
We found that product ratings (feature, design, performance), bug reports, and usage tutorials are persistent themes in the videos.
arXiv Detail & Related papers (2023-05-02T21:47:06Z)
- Analyzing User Engagement with TikTok's Short Format Video Recommendations using Data Donations [31.764672446151412]
We analyze user engagement on TikTok using data we collect via a data donation system.
We find that the average daily usage time increases over the users' lifetime while the user attention remains stable at around 45%.
We also find that users like videos uploaded by people they follow more often than videos recommended from accounts they do not follow.
arXiv Detail & Related papers (2023-01-12T11:34:45Z)
- Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning, based on importance sampling, to improve an initial supervised system using binary user feedback (a minimal sketch of the weighting idea appears after this list).
Our work opens up the prospect of exploiting interactions with real users to improve conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z)
- Designing Indicators to Combat Fake Media [24.257090478689815]
This research designs and investigates the use of provenance indicators to help users identify fake videos.
We first interview users regarding their experiences with different misinformation modes.
Then, we conduct a participatory design study to develop and design fake video indicators.
arXiv Detail & Related papers (2020-10-01T16:58:12Z)
- Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations on YouTube [2.8325478162326885]
We conduct semi-structured interviews with middle-aged YouTube video consumers to analyze user beliefs about the video recommendation system.
We identify four groups of user beliefs: Previous Actions, Social Media, Recommender System, and Company Policy.
We propose a framework to distinguish the four main actors that users believe influence their video recommendations.
arXiv Detail & Related papers (2020-08-07T14:35:50Z)
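As referenced in the feedback-weighted learning entry above, the sketch below illustrates one way binary user feedback could be turned into an importance-sampled training signal. It is a minimal interpretation of that abstract, not the authors' implementation: the function name `feedback_weighted_loss`, the classification-style setup, and the logged `behavior_probs` are all assumptions made for illustration.

```python
# Illustrative sketch only: reweighting a log-likelihood loss with
# importance weights derived from binary user feedback. The exact loss
# and weighting scheme in the cited paper may differ.
import torch
import torch.nn.functional as F

def feedback_weighted_loss(logits, sampled_answers, feedback, behavior_probs):
    """
    logits:          (batch, n_answers) scores from the model being updated
    sampled_answers: (batch,) answers the deployed model showed to users
    feedback:        (batch,) binary user feedback (1.0 = good, 0.0 = bad)
    behavior_probs:  (batch,) probability the deployed model assigned to each
                     shown answer, assumed logged at serving time
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Log-likelihood of the answers that were actually shown to users.
    chosen = log_probs.gather(1, sampled_answers.unsqueeze(1)).squeeze(1)
    # Importance sampling: feedback acts as a reward, divided by the behavior
    # policy's probability to correct for how the answers were sampled.
    weights = feedback / behavior_probs.clamp(min=1e-6)
    return -(weights * chosen).mean()

# Tiny usage example with random data.
logits = torch.randn(4, 10, requires_grad=True)
answers = torch.randint(0, 10, (4,))
feedback = torch.tensor([1.0, 0.0, 1.0, 1.0])
behavior = torch.full((4,), 0.2)
loss = feedback_weighted_loss(logits, answers, feedback, behavior)
loss.backward()
```

Under these assumptions, answers the deployed model was unsure about but that users approved receive large weights, pushing the updated model toward them, while answers with negative feedback contribute nothing to the gradient.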