Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles
- URL: http://arxiv.org/abs/2210.10085v1
- Date: Tue, 18 Oct 2022 18:27:47 GMT
- Title: Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles
- Authors: Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko,
Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian
Gavornik, Maria Bielikova
- Abstract summary: We present results of an auditing study performed over YouTube aimed at investigating how fast a user can get into a misinformation filter bubble.
We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation promoting content.
Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation debunking content.
- Score: 0.5898451150401338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present results of an auditing study performed over YouTube
aimed at investigating how fast a user can get into a misinformation filter
bubble, but also what it takes to "burst the bubble", i.e., revert the bubble
enclosure. We employ a sock puppet audit methodology, in which pre-programmed
agents (acting as YouTube users) delve into misinformation filter bubbles by
watching misinformation promoting content. Then they try to burst the bubbles
and reach more balanced recommendations by watching misinformation debunking
content. We record search results, home page results, and recommendations for
the watched videos. Overall, we recorded 17,405 unique videos, out of which we
manually annotated 2,914 for the presence of misinformation. The labeled data
was used to train a machine learning model classifying videos into three
classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the
trained model to classify the remaining videos, which would not be feasible to
annotate manually.
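The classifier's details are not spelled out in this summary; as a rough illustration of how a three-class (promoting / debunking / neutral) video classifier could be trained on such manual annotations, here is a minimal sketch using TF-IDF features over video title and transcript text with logistic regression. The texts, labels, and feature choice are hypothetical assumptions, not the authors' actual pipeline.

```python
# Minimal, hypothetical sketch of a three-class video classifier
# (promoting / debunking / neutral) trained on manually annotated examples.
# The feature choice (TF-IDF over title + transcript text) and the data
# below are illustrative assumptions, not the pipeline used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "vaccines cause autism: the truth they hide from you",   # promoting
    "debunked: no link between vaccines and autism",          # debunking
    "how mRNA vaccines work, explained by an immunologist",   # neutral
    "flat earth proof NASA does not want you to see",         # promoting
]
labels = ["promoting", "debunking", "neutral", "promoting"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# On the paper's real dataset, held-out accuracy is reported as 0.82.
print(classifier.predict(["new study debunks claim that 5g spreads covid"]))
```

In the study, a model trained this way is then applied to the roughly 14,500 recorded videos that were not annotated manually.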
Using both the manually and automatically annotated data, we observe the
misinformation bubble dynamics for a range of audited topics. Our key finding
is that even though filter bubbles do not appear in some situations, when they
do, it is possible to burst them by watching misinformation debunking content
(albeit manifesting differently from topic to topic). We also observe a sudden
decrease in the misinformation filter bubble effect when misinformation debunking
videos are watched after misinformation promoting videos, suggesting a strong
contextuality of recommendations. Finally, when comparing our results with a
previous similar study, we do not observe significant improvements in the
overall quantity of recommended misinformation content.
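To make the notion of bubble dynamics concrete, one simple way to track it is the share of misinformation-promoting videos among the recommendations recorded after each watched video; a drop in this share once the agent switches to debunking content corresponds to the "bursting" effect described above. The log format and metric below are illustrative assumptions, not the paper's exact measurements.

```python
# Illustrative computation of misinformation bubble dynamics: for each agent
# and each watched-video step, the fraction of recommended videos labeled as
# "promoting". The log format and the values are hypothetical.
from collections import defaultdict

# (agent_id, step, labels of the recommendations recorded at that step)
recommendation_log = [
    ("agent-1", 1, ["neutral", "promoting", "neutral"]),
    ("agent-1", 2, ["promoting", "promoting", "neutral"]),  # deep in the bubble
    ("agent-1", 3, ["debunking", "neutral", "neutral"]),    # after debunking watches
]

promoting_share = defaultdict(dict)
for agent, step, labels in recommendation_log:
    promoting_share[agent][step] = labels.count("promoting") / len(labels)

for agent, series in sorted(promoting_share.items()):
    print(agent, series)  # e.g. agent-1 {1: 0.33..., 2: 0.67..., 3: 0.0}
```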
Related papers
- Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video Recommendation [30.395376392259497]
Filter bubbles have been studied extensively within the context of online content platforms.
With the rise of short-video platforms, the filter bubble has received additional attention.
arXiv Detail & Related papers (2024-03-07T14:14:40Z)
- The Potential of Vision-Language Models for Content Moderation of Children's Videos [1.0589208420411014]
This paper presents an in-depth analysis of how context-specific language prompts affect content moderation performance.
It is important to include more context in content moderation prompts, particularly for cartoon videos.
arXiv Detail & Related papers (2023-12-06T22:29:16Z)
- Causalainer: Causal Explainer for Automatic Video Summarization [77.36225634727221]
In many application scenarios, improper video summarization can have a large impact.
Modeling explainability is a key concern.
A Causal Explainer, dubbed Causalainer, is proposed to address this issue.
arXiv Detail & Related papers (2023-04-30T11:42:06Z)
- An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes [0.6094711396431726]
We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles.
Our key finding is that bursting a filter bubble is possible, albeit manifesting differently from topic to topic.
Sadly, we did not find much improvement in misinformation occurrences, despite recent pledges by YouTube.
arXiv Detail & Related papers (2022-03-25T16:49:57Z)
- Look for the Change: Learning Object States and State-Modifying Actions from Untrimmed Web Videos [55.60442251060871]
Human actions often induce changes of object states such as "cutting an apple" or "pouring coffee".
We develop a self-supervised model for jointly learning state-modifying actions together with the corresponding object states.
To cope with noisy uncurated training data, our model incorporates a noise adaptive weighting module supervised by a small number of annotated still images.
arXiv Detail & Related papers (2022-03-22T11:45:10Z)
- Misinformation Detection on YouTube Using Video Captions [6.503828590815483]
This work proposes an approach that uses state-of-the-art NLP techniques to extract features from video captions (subtitles).
To evaluate our approach, we utilize a publicly accessible and labeled dataset for classifying videos as misinformation or not.
arXiv Detail & Related papers (2021-07-02T10:02:36Z)
- What's wrong with this video? Comparing Explainers for Deepfake Detection [13.089182408360221]
Deepfakes are computer-manipulated videos in which the face of an individual has been replaced with that of another.
In this work we develop, extend and compare white-box, black-box and model-specific techniques for explaining the labelling of real and fake videos.
In particular, we adapt SHAP, GradCAM and self-attention models to the task of explaining the predictions of state-of-the-art detectors based on EfficientNet.
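As a rough illustration of the Grad-CAM part of that comparison, the sketch below computes a class-activation heatmap over the final convolutional block of a randomly initialized torchvision EfficientNet-B0 with a binary real/fake head; the model, input frame, and target class are assumptions for illustration, not the detectors audited in that paper.

```python
# Hypothetical sketch: Grad-CAM heatmap for the "fake" class of an
# EfficientNet-style deepfake detector, using Captum. The two-class head,
# the untrained weights, and the random input frame are illustrative.
import torch
from captum.attr import LayerAttribution, LayerGradCam
from torchvision.models import efficientnet_b0

model = efficientnet_b0(weights=None, num_classes=2)  # assume 0 = real, 1 = fake
model.eval()

gradcam = LayerGradCam(model, model.features[-1])      # last convolutional block

frame = torch.rand(1, 3, 224, 224)                     # stand-in for a face crop
attr = gradcam.attribute(frame, target=1)              # explain the "fake" logit
heatmap = LayerAttribution.interpolate(attr, (224, 224))
print(heatmap.shape)                                   # torch.Size([1, 1, 224, 224])
```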
arXiv Detail & Related papers (2021-05-12T18:44:39Z)
- Watch and Learn: Mapping Language and Noisy Real-world Videos with Self-supervision [54.73758942064708]
We teach machines to understand visuals and natural language by learning the mapping between sentences and noisy video snippets without explicit annotations.
For training and evaluation, we contribute a new dataset, ApartmenTour, that contains a large number of online videos and subtitles.
arXiv Detail & Related papers (2020-11-19T03:43:56Z)
- Regularized Two-Branch Proposal Networks for Weakly-Supervised Moment Retrieval in Videos [108.55320735031721]
Video moment retrieval aims to localize the target moment in a video according to the given sentence.
Most existing weakly-supervised methods apply a MIL-based (multiple instance learning) framework to develop inter-sample confrontment.
We propose a novel Regularized Two-Branch Proposal Network to simultaneously consider the inter-sample and intra-sample confrontments.
arXiv Detail & Related papers (2020-08-19T04:42:46Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
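As a rough illustration of the perplexity-based check described in the entry above, the sketch below primes a small causal language model with evidence text and then measures the perplexity of a claim conditioned on that evidence; the model choice (GPT-2), the example texts, and the scoring are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of perplexity-based claim checking: prime a causal LM with
# evidence text, then measure how surprising (high-perplexity) the claim is.
# Model choice, texts, and any threshold are illustrative assumptions.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens conditioned on the evidence prefix."""
    evidence_ids = tokenizer(evidence, return_tensors="pt").input_ids
    claim_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evidence_ids, claim_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : evidence_ids.shape[1]] = -100   # score only the claim tokens
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())

evidence = "Multiple large studies found no link between 5G networks and viruses."
print(claim_perplexity(evidence, "5G towers spread the coronavirus."))
print(claim_perplexity(evidence, "5G towers do not spread the coronavirus."))
```

A higher score for the first claim than for the second would indicate that the promoting claim is more "surprising" given the evidence, which is the signal that paper exploits for unsupervised debunking.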
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.