Scientific discourse on YouTube: Motivations for citing research in comments
- URL: http://arxiv.org/abs/2405.12798v1
- Date: Tue, 21 May 2024 13:50:02 GMT
- Title: Scientific discourse on YouTube: Motivations for citing research in comments
- Authors: Sören Striewski, Olga Zagovora, Isabella Peters
- Abstract summary: This study provides insights into why individuals post links to research publications in comments.
We discovered that the primary motives for sharing research links were (1) providing more insights into the topic and (2) challenging information offered by other commentators.
- Score: 0.3277163122167434
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: YouTube is a valuable source of user-generated content on a wide range of topics, and it encourages user participation through the use of a comment system. Video content is increasingly addressing scientific topics, and there is evidence that both academics and consumers use video descriptions and video comments to refer to academic research and scientific publications. Because commenting is a discursive behavior, this study provides insights into why individuals post links to research publications in comments. For this, a qualitative content analysis and an iterative coding approach were applied. Furthermore, the reasons for mentioning academic publications in comments were contrasted with the reasons for citing in scholarly works and with the reasons for commenting on YouTube. We discovered that the primary motives for sharing research links were (1) providing more insights into the topic and (2) challenging information offered by other commentators.
Related papers
- Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster [72.84926097773578]
We investigate the effect of explanations on the speed of real-world moderators.
Our experiments show that while generic explanations do not affect moderators' speed and are often ignored, structured explanations reduce their decision-making time by 7.4%.
arXiv Detail & Related papers (2024-06-06T14:23:10Z)
- Amplifying Academic Research through YouTube: Engagement Metrics as Predictors of Citation Impact [0.0]
This study explores the interplay between YouTube engagement metrics and the academic impact of cited publications within video descriptions.
By analyzing data from Altmetric.com and YouTube's API, it assesses how YouTube video features relate to citation impact.
arXiv Detail & Related papers (2024-05-21T12:43:37Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- ViCo: Engaging Video Comment Generation with Human Preference Rewards [68.50351391812723]
We propose ViCo with three novel designs to tackle the challenges for generating engaging Video Comments.
To quantify the engagement of comments, we utilize the number of "likes" each comment receives as a proxy of human preference.
To automatically evaluate the engagement of comments, we train a reward model to align its judgment to the above proxy.
arXiv Detail & Related papers (2023-08-22T04:01:01Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) combining task decomposition with iterative self-refinement shows strong potential for improving the generated opinions.
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- YouTube and Science: Models for Research Impact [1.237556184089774]
We created new datasets using YouTube videos and mentions of research articles on various online platforms.
We analyzed these datasets through statistical techniques and visualization, and built machine learning models to predict whether a research article is cited in videos.
According to our results, research articles mentioned in more tweets and news coverage have a higher chance of receiving video citations.
arXiv Detail & Related papers (2022-09-01T19:25:38Z)
- Classifying YouTube Comments Based on Sentiment and Type of Sentence [0.0]
We address the challenge of text extraction and classification from YouTube comments using well-known statistical measures and machine learning models.
The results show that our approach, which incorporates conventional methods, performs well on the classification task, validating its potential to assist content creators in increasing viewer engagement on their channels.
arXiv Detail & Related papers (2021-10-31T18:08:10Z)
- The Potential of Using Vision Videos for CrowdRE: Video Comments as a Source of Feedback [0.8594140167290097]
We analyze and assess the potential of using vision videos for CrowdRE.
In a case study, we analyzed 4505 comments on a vision video from YouTube.
We conclude that the use of vision videos for CrowdRE has considerable potential.
arXiv Detail & Related papers (2021-08-04T14:18:27Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Are Top School Students More Critical of Their Professors? Mining Comments on RateMyProfessor.com [83.2634062100579]
Student reviews and comments on RateMyProfessor.com reflect realistic learning experiences of students.
Our study shows that student reviews and comments contain crucial information and can serve as essential references for enrollment in courses and universities.
arXiv Detail & Related papers (2021-01-23T20:01:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.