YouTube and Science: Models for Research Impact
- URL: http://arxiv.org/abs/2209.02380v1
- Date: Thu, 1 Sep 2022 19:25:38 GMT
- Title: YouTube and Science: Models for Research Impact
- Authors: Abdul Rahman Shaikh, Hamed Alhoori, Maoyuan Sun
- Abstract summary: We created new datasets using YouTube videos and mentions of research articles on various online platforms.
We analyzed these datasets through statistical techniques and visualization, and built machine learning models to predict whether a research article is cited in videos.
According to our results, research articles mentioned in more tweets and news coverage have a higher chance of receiving video citations.
- Score: 1.237556184089774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video communication has been rapidly increasing over the past decade, with
YouTube providing a medium where users can post, discover, share, and react to
videos. There has also been an increase in the number of videos citing research
articles, especially since it has become relatively commonplace for academic
conferences to require video submissions. However, the relationship between
research articles and YouTube videos is not clear, and the purpose of the
present paper is to address this issue. We created new datasets using YouTube
videos and mentions of research articles on various online platforms. We found
that most of the articles cited in the videos are related to medicine and
biochemistry. We analyzed these datasets through statistical techniques and
visualization, and built machine learning models to predict (1) whether a
research article is cited in videos, (2) whether a research article cited in a
video achieves a level of popularity, and (3) whether a video citing a research
article becomes popular. The best models achieved F1 scores between 80% and
94%. According to our results, research articles mentioned in more tweets and
news coverage have a higher chance of receiving video citations. We also found
that video views are important for predicting citations and increasing research
articles' popularity and public engagement with science.
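
Below is a minimal sketch of the first prediction task described in the abstract (whether a research article is cited in a video), assuming a table of altmetric-style counts per article. The file name, column names, and choice of classifier are illustrative assumptions for this listing, not the authors' actual pipeline.

```python
# Hypothetical sketch (not the authors' code): binary classifier predicting
# whether a research article receives a YouTube video citation, using
# altmetric-style features such as tweet and news-mention counts.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Assumed input: one row per article; file and column names are illustrative.
df = pd.read_csv("article_altmetrics.csv")
features = ["tweet_count", "news_mentions", "blog_mentions", "mendeley_readers"]
X = df[features]
y = df["cited_in_video"]  # 1 if the article is cited in at least one video

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print("F1:", f1_score(y_test, clf.predict(X_test)))
```

A tree-based classifier is used here only because it handles skewed count features without scaling; the paper reports F1 scores between 80% and 94% for its best models, which this sketch does not attempt to reproduce.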
Related papers
- Scientific discourse on YouTube: Motivations for citing research in comments [0.3277163122167434]
This study provides insights into why individuals post links to research publications in comments.
We discovered that the primary motives for sharing research links were (1) providing more insights into the topic and (2) challenging information offered by other commentators.
arXiv Detail & Related papers (2024-05-21T13:50:02Z) - Amplifying Academic Research through YouTube: Engagement Metrics as Predictors of Citation Impact [0.0]
This study explores the interplay between YouTube engagement metrics and the academic impact of publications cited in video descriptions.
By analyzing data from Altmetric.com and YouTube's API, it assesses how YouTube video features relate to citation impact.
arXiv Detail & Related papers (2024-05-21T12:43:37Z) - A Survey on Video Diffusion Models [103.03565844371711]
The recent wave of AI-generated content (AIGC) has witnessed substantial success in computer vision.
Due to their impressive generative capabilities, diffusion models are gradually superseding methods based on GANs and auto-regressive Transformers.
This paper presents a comprehensive review of video diffusion models in the AIGC era.
arXiv Detail & Related papers (2023-10-16T17:59:28Z) - Video Timeline Modeling For News Story Understanding [123.03394373132353]
We present a novel problem, namely video timeline modeling.
Our objective is to create a video-associated timeline from a set of videos related to a specific topic, thereby facilitating the content and structure understanding of the story being told.
This problem has significant potential in various real-world applications, for instance, news story summarization.
arXiv Detail & Related papers (2023-09-23T18:24:15Z) - InternVideo: General Video Foundation Models via Generative and
Discriminative Learning [52.69422763715118]
We present general video foundation models, InternVideo, for dynamic and complex video-level understanding tasks.
InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives.
InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications.
arXiv Detail & Related papers (2022-12-06T18:09:49Z) - Machine Learning enabled models for YouTube Ranking Mechanism and Views
Prediction [4.460478321893019]
The proposed work aims to identify and estimate the reach, popularity, and views of a YouTube video from selected features using machine learning and AI techniques.
A ranking system that takes trending videos into consideration is also used.
arXiv Detail & Related papers (2022-11-15T18:06:30Z) - Quantifying the Online Long-Term Interest in Research [0.0]
Knowing how long a research article continues to be mentioned online could be valuable information for researchers.
We analyzed multiple social media platforms on which users share and/or discuss scholarly articles.
Using the online social media metrics for each of these three clusters, we built machine learning models to predict the long-term online interest in research articles.
arXiv Detail & Related papers (2022-09-13T16:57:44Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - What is More Likely to Happen Next? Video-and-Language Future Event
Prediction [111.93601253692165]
Given a video with aligned dialogue, people can often infer what is more likely to happen next.
In this work, we explore whether AI models are able to learn to make such multimodal commonsense next-event predictions.
We collect a new dataset, named Video-and-Language Event Prediction, with 28,726 future event prediction examples.
arXiv Detail & Related papers (2020-10-15T19:56:47Z) - Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube
Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.