Quantifying the Academic Quality of Children's Videos using Machine
Comprehension
- URL: http://arxiv.org/abs/2303.17201v2
- Date: Tue, 6 Feb 2024 04:39:48 GMT
- Title: Quantifying the Academic Quality of Children's Videos using Machine
Comprehension
- Authors: Sumeet Kumar, Mallikarjuna T., Ashiqur Khudabukhsh
- Abstract summary: This research focuses on learning in terms of what's taught in schools.
It proposes a way to measure the academic quality of children's videos.
- Score: 2.5091819952713057
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: YouTube Kids (YTK) is one of the most popular kids' applications used by
millions of kids daily. However, various studies have raised concerns about
the videos on the platform, such as the overabundance of entertainment and
commercial content. YouTube recently introduced high-quality guidelines that
include `promoting learning' and proposed using them to rank channels.
However, the concept of learning is multi-faceted, and it can be difficult to
define and measure in the context of online videos. This research focuses on
learning in terms of what's taught in schools and proposes a way to measure the
academic quality of children's videos. Using a new dataset of questions and
answers from children's videos, we first show that a Reading Comprehension (RC)
model can estimate academic learning. Then, using a large dataset of middle
school textbook questions on diverse topics, we quantify the academic quality
of top channels as the number of children's textbook questions that an RC model
can correctly answer. By analyzing over 80,000 videos posted on the top 100
channels, we present the first thorough analysis of the academic quality of
channels on YTK.
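To make the measurement concrete, below is a minimal sketch of the scoring idea in the abstract: an off-the-shelf extractive Reading Comprehension model tries to answer textbook questions from a video's transcript, and a channel's academic quality is the count of questions it can answer correctly. The specific checkpoint (deepset/roberta-base-squad2), the confidence threshold, and the containment-based match criterion are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, not the authors' implementation. Assumed/hypothetical:
# the QA checkpoint, the 0.3 confidence threshold, and the match criterion.
from transformers import pipeline

# Any extractive question-answering ("reading comprehension") model works;
# this public SQuAD2 checkpoint is only an illustrative choice.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def normalize(text: str) -> str:
    """Crude stand-in for SQuAD-style answer normalization."""
    return " ".join(text.lower().split())

def answered_correctly(question: str, gold_answer: str, transcript: str,
                       min_score: float = 0.3) -> bool:
    """True if the RC model recovers the textbook answer from the transcript."""
    pred = qa(question=question, context=transcript)
    return (pred["score"] >= min_score
            and normalize(gold_answer) in normalize(pred["answer"]))

def channel_academic_quality(transcripts, qa_pairs):
    """Count textbook questions answerable from at least one video on the
    channel -- the paper's notion of academic quality, up to details.

    transcripts: iterable of video transcript strings for one channel
    qa_pairs: list of (question, gold_answer) tuples from textbooks
    """
    return sum(
        any(answered_correctly(q, a, t) for t in transcripts)
        for q, a in qa_pairs
    )
```

At the scale reported in the abstract (80,000+ videos across 100 channels), one would batch the pipeline calls and chunk long transcripts; the sketch above only captures the counting logic.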
Related papers
- VCEval: Rethinking What is a Good Educational Video and How to Automatically Evaluate It [46.67441830344145]
We focus on the task of automatically evaluating the quality of video course content.
We propose three evaluation principles and design a new evaluation framework, VCEval, based on these principles.
Our method effectively distinguishes video courses of different content quality and produces a range of interpretable results.
arXiv Detail & Related papers (2024-06-15T13:18:30Z) - TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen [59.41896032227508]
We make publicly available a novel dataset comprising images from TV series released post-2021.
This dataset holds significant potential for use in various research areas, including the evaluation of incremental learning.
arXiv Detail & Related papers (2024-04-16T17:47:45Z) - Self-Supervised Learning for Videos: A Survey [70.37277191524755]
Self-supervised learning has shown promise in both image and video domains.
In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain.
arXiv Detail & Related papers (2022-06-18T00:26:52Z) - Subjective and Objective Analysis of Streamed Gaming Videos [60.32100758447269]
We study subjective and objective Video Quality Assessment (VQA) models on gaming videos.
We created a novel gaming video resource, the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real gaming videos.
We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
arXiv Detail & Related papers (2022-03-24T03:02:57Z) - Video Question Answering: Datasets, Algorithms and Challenges [99.9179674610955]
Video Question Answering (VideoQA) aims to answer natural language questions based on given videos.
This paper provides a clear taxonomy and comprehensive analysis of VideoQA, focusing on the datasets, algorithms, and unique challenges.
arXiv Detail & Related papers (2022-03-02T16:34:09Z) - NEWSKVQA: Knowledge-Aware News Video Question Answering [5.720640816755851]
We explore a new frontier in video question answering: answering knowledge-based questions in the context of news videos.
We curate a new dataset of 12K news videos spanning 156 hours with 1M multiple-choice question-answer pairs covering 8263 unique entities.
We propose a novel approach, NEWSKVQA, which performs multi-modal inference over textual multiple-choice questions, videos, their transcripts, and a knowledge base.
arXiv Detail & Related papers (2022-02-08T17:31:31Z) - VALUE: A Multi-Task Benchmark for Video-and-Language Understanding
Evaluation [124.02278735049235]
The VALUE benchmark aims to cover a broad range of video genres, video lengths, data volumes, and task difficulty levels.
We evaluate various baseline methods with and without large-scale VidL pre-training.
The significant gap between our best model and human performance calls for future research on advanced VidL models.
arXiv Detail & Related papers (2021-06-08T18:34:21Z) - VLEngagement: A Dataset of Scientific Video Lectures for Evaluating
Population-based Engagement [23.078055803229912]
Video lectures have become one of the primary modalities for imparting knowledge to the masses in the current digital age.
There is still an important need for data and research aimed at understanding learner engagement with scientific video lectures.
This paper introduces VLEngagement, a novel dataset that consists of content-based and video-specific features extracted from publicly available scientific video lectures.
arXiv Detail & Related papers (2020-11-02T14:20:19Z) - Classification of Important Segments in Educational Videos using
Multimodal Features [10.175871202841346]
We propose a multimodal neural architecture that utilizes state-of-the-art audio, visual and textual features.
Our experiments investigate the impact of visual and temporal information, as well as the combination of multimodal features on importance prediction.
arXiv Detail & Related papers (2020-10-26T14:40:23Z) - A Clustering-Based Method for Automatic Educational Video Recommendation
Using Deep Face-Features of Lecturers [0.0]
This paper presents a method for generating educational video recommendations using deep face-features of lecturers without identifying them.
We use an unsupervised face clustering mechanism to create relations among the videos based on the lecturer's presence.
We rank these recommended videos based on the amount of time the referenced lecturers were present.
arXiv Detail & Related papers (2020-10-09T16:53:16Z)