"It is just a flu": Assessing the Effect of Watch History on YouTube's
Pseudoscientific Video Recommendations
- URL: http://arxiv.org/abs/2010.11638v5
- Date: Tue, 12 Oct 2021 23:03:38 GMT
- Title: "It is just a flu": Assessing the Effect of Watch History on YouTube's
Pseudoscientific Video Recommendations
- Authors: Kostantinos Papadamou and Savvas Zannettou and Jeremy Blackburn and
Emiliano De Cristofaro and Gianluca Stringhini and Michael Sirivianos
- Abstract summary: We collect 6.6K videos related to COVID-19, the Flat Earth theory, as well as the anti-vaccination and anti-mask movements.
Using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant.
We quantify user exposure to this content on various parts of the platform and how this exposure changes based on the user's watch history.
- Score: 13.936247103754905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The role played by YouTube's recommendation algorithm in unwittingly
promoting misinformation and conspiracy theories is not entirely understood.
Yet, this can have dire real-world consequences, especially when
pseudoscientific content is promoted to users at critical times, such as the
COVID-19 pandemic. In this paper, we set out to characterize and detect
pseudoscientific misinformation on YouTube. We collect 6.6K videos related to
COVID-19, the Flat Earth theory, as well as the anti-vaccination and anti-mask
movements. Using crowdsourcing, we annotate them as pseudoscience, legitimate
science, or irrelevant and train a deep learning classifier to detect
pseudoscientific videos with an accuracy of 0.79.
We quantify user exposure to this content on various parts of the platform
and how this exposure changes based on the user's watch history. We find that
YouTube suggests more pseudoscientific content for traditional
pseudoscientific topics (e.g., Flat Earth, anti-vaccination) than for emerging
ones (e.g., COVID-19). At the same time, these recommendations are more common
on the search results page than on a user's homepage or in the recommendation
section when actively watching videos. Finally, we shed light on how a user's
watch history substantially affects the type of recommended videos.
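As a rough illustration of the annotation-then-classification step, the sketch below trains a simple bag-of-words classifier over video metadata text with the paper's three labels. It is not the authors' deep learning model, and all strings are invented placeholders rather than data from the study:
    # Minimal sketch only: a TF-IDF + logistic regression baseline standing in for
    # the paper's deep learning classifier. All strings are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical annotated metadata (e.g., title + description per video).
    texts = [
        "the earth is flat and the space agencies hide the truth",
        "randomized controlled trial shows the vaccine is safe and effective",
        "top 10 funny cat moments compilation",
    ]
    labels = ["pseudoscience", "legitimate science", "irrelevant"]

    # Fit a simple text classification pipeline on the toy examples.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)

    # Classify a new (placeholder) video description.
    print(clf.predict(["covid is just a flu and masks do nothing"]))
In the paper itself, the deep learning classifier is trained on richer video features and reaches 0.79 accuracy; this snippet only shows the shape of the task.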
Related papers
- "Here's Your Evidence": False Consensus in Public Twitter Discussions of COVID-19 Science [50.08057052734799]
We estimate scientific consensus based on samples of abstracts from preprint servers.
We find that anti-consensus posts and users, though overall less numerous than pro-consensus ones, are vastly over-represented on Twitter.
arXiv Detail & Related papers (2024-01-24T06:16:57Z)
- How to Train Your YouTube Recommender to Avoid Unwanted Videos [51.6864681332515]
"Not interested" and "Don't recommend channel" buttons allow users to indicate disinterest when presented with unwanted recommendations.
We simulated YouTube users with sock puppet agents.
We found that the "Not interested" button worked best, significantly reducing such recommendations in all topics tested.
arXiv Detail & Related papers (2023-07-27T00:21:29Z)
- Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles [0.5898451150401338]
We present results of an auditing study performed on YouTube, aimed at investigating how quickly a user can get into a misinformation filter bubble.
We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content.
Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation-debunking content.
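A loose sketch of that two-phase audit is given below; it mirrors only the control flow, and the watch_and_collect helper is hypothetical and would have to be replaced by real browser automation against YouTube:
    # Hedged sketch of the audit's control flow only (not the authors' code).
    # watch_and_collect is a hypothetical stand-in for real browser automation
    # (e.g., Selenium driving YouTube); here it returns placeholder IDs so the
    # loop is runnable.
    from dataclasses import dataclass, field

    @dataclass
    class SockPuppet:
        name: str
        history: list = field(default_factory=list)

    def watch_and_collect(puppet, video_id):
        """Pretend to watch a video and return recommended video IDs."""
        puppet.history.append(video_id)
        return [f"rec_{video_id}_{i}" for i in range(3)]  # placeholder recommendations

    def audit(promoting, debunking):
        puppet = SockPuppet("agent-1")
        recs = {"after_promoting": [], "after_debunking": []}
        # Phase 1: build up a misinformation watch history ("get into the bubble").
        for vid in promoting:
            recs["after_promoting"].extend(watch_and_collect(puppet, vid))
        # Phase 2: watch debunking videos and see whether recommendations rebalance.
        for vid in debunking:
            recs["after_debunking"].extend(watch_and_collect(puppet, vid))
        return recs

    print(audit(["promo_1", "promo_2"], ["debunk_1"]))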
arXiv Detail & Related papers (2022-10-18T18:27:47Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- YouTube and Science: Models for Research Impact [1.237556184089774]
We created new datasets using YouTube videos and mentions of research articles on various online platforms.
We analyzed these datasets through statistical techniques and visualization, and built machine learning models to predict whether a research article is cited in videos.
According to our results, research articles mentioned in more tweets and news coverage have a higher chance of receiving video citations.
arXiv Detail & Related papers (2022-09-01T19:25:38Z)
- Deep Learning-Based Sentiment Analysis of COVID-19 Vaccination Responses from Twitter Data [2.6256839599007273]
This study helps gauge public opinion on the COVID-19 vaccines, in support of the broader aim of eradicating the coronavirus.
Social media, and Twitter in particular, is currently the best way to express feelings and emotions, and it gives a good sense of what is trending and what is on people's minds.
arXiv Detail & Related papers (2022-08-26T18:07:37Z)
- "COVID-19 was a FIFA conspiracy #curropt": An Investigation into the Viral Spread of COVID-19 Misinformation [60.268682953952506]
We estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models.
We provide a strategy to combat social media posts that are likely to cause widespread harm.
arXiv Detail & Related papers (2022-06-12T19:41:01Z)
- Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
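A minimal sketch of that scoring step, assuming a GPT-2 model via Hugging Face transformers rather than the paper's exact setup, might look like this:
    # Hedged sketch of the general idea (not the paper's exact pipeline):
    # prime a causal language model with retrieved evidence, then score the
    # claim by the perplexity the model assigns to its tokens.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def claim_perplexity(evidence, claim):
        """Perplexity of the claim tokens, conditioned on the evidence as context."""
        ev_ids = tokenizer(evidence, return_tensors="pt").input_ids
        cl_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
        input_ids = torch.cat([ev_ids, cl_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : ev_ids.shape[1]] = -100  # ignore evidence positions in the loss
        with torch.no_grad():
            loss = model(input_ids, labels=labels).loss  # mean NLL over claim tokens
        return float(torch.exp(loss))

    # Placeholder strings: a higher score suggests the claim fits the evidence poorly.
    evidence = "Large clinical trials report that the vaccines are safe and effective."
    print(claim_perplexity(evidence, "The vaccines alter human DNA."))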
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
- A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos [14.867862489411868]
Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence.
Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized.
We have developed a classifier for automatically determining if a video is conspiratorial.
arXiv Detail & Related papers (2020-03-06T17:31:30Z)
- Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences among countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share an interest in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.