An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting
and Recent Behavior Changes
- URL: http://arxiv.org/abs/2203.13769v1
- Date: Fri, 25 Mar 2022 16:49:57 GMT
- Title: An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting
and Recent Behavior Changes
- Authors: Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro,
Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria
Bielikova
- Abstract summary: We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles.
Our key finding is that bursting a filter bubble is possible, although it manifests differently from topic to topic.
Sadly, we did not find much improvement in misinformation occurrences, despite recent pledges by YouTube.
- Score: 0.6094711396431726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The negative effects of misinformation filter bubbles in adaptive systems
have been known to researchers for some time. Several studies have
investigated, most prominently on YouTube, how fast a user can get into a
misinformation filter bubble simply by selecting the wrong choices from the
items offered. Yet, no studies so far have investigated what it takes to burst
the bubble, i.e., revert the bubble enclosure. We present a study in which
pre-programmed agents (acting as YouTube users) delve into misinformation
filter bubbles by watching misinformation-promoting content (for various
topics). Then, by watching misinformation-debunking content, the agents try to
burst the bubbles and reach more balanced recommendation mixes. We recorded
the search results and recommendations that the agents encountered and
analyzed them for the presence of misinformation. Our key finding is that
bursting a filter bubble is possible, although it manifests differently from
topic to topic. Moreover, we observe that filter bubbles do not truly appear
in some situations. We also draw a direct comparison with a previous study.
Sadly, we did not find much improvement in misinformation occurrences, despite
recent pledges by YouTube.
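The audit protocol described in the abstract lends itself to a simple agent loop. The Python sketch below is only an illustration of that loop under stated assumptions: watch_video, get_recommendations, and label_stance are hypothetical placeholders supplied by the caller, not the authors' published code or any YouTube API.

```python
# Minimal sketch of the sock-puppet audit loop described in the abstract.
# watch_video, get_recommendations, and label_stance are hypothetical
# placeholders; the paper's actual agent implementation and annotation
# pipeline are not reproduced here.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AuditRecord:
    phase: str                    # "promoting" or "debunking"
    video_id: str
    recommended_ids: List[str] = field(default_factory=list)


def run_agent(promoting_videos: List[str],
              debunking_videos: List[str],
              watch_video: Callable[[str], None],
              get_recommendations: Callable[[], List[str]]) -> List[AuditRecord]:
    """Build the bubble by watching promoting videos, then try to burst it."""
    log: List[AuditRecord] = []
    for phase, videos in (("promoting", promoting_videos),
                          ("debunking", debunking_videos)):
        for vid in videos:
            watch_video(vid)              # play the seed video as the agent
            recs = get_recommendations()  # snapshot recommendations afterwards
            log.append(AuditRecord(phase, vid, recs))
    return log


def misinformation_share(log: List[AuditRecord],
                         label_stance: Callable[[str], str]) -> Dict[str, float]:
    """Fraction of recommended videos labeled 'promoting', per phase."""
    shares: Dict[str, float] = {}
    for phase in ("promoting", "debunking"):
        recs = [r for rec in log if rec.phase == phase
                for r in rec.recommended_ids]
        hits = sum(1 for r in recs if label_stance(r) == "promoting")
        shares[phase] = hits / len(recs) if recs else 0.0
    return shares
```

Comparing the per-phase shares returned by misinformation_share is one simple way to quantify whether watching debunking content rebalanced the recommendation mix.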
Related papers
- Explainability and Hate Speech: Structured Explanations Make Social Media Moderators Faster [72.84926097773578]
We investigate the effect of explanations on the speed of real-world moderators.
Our experiments show that while generic explanations do not affect their speed and are often ignored, structured explanations lower moderators' decision making time by 7.4%.
arXiv Detail & Related papers (2024-06-06T14:23:10Z)
- Uncovering the Deep Filter Bubble: Narrow Exposure in Short-Video Recommendation [30.395376392259497]
Filter bubbles have been studied extensively within the context of online content platforms.
With the rise of short-video platforms, filter bubbles have been given extra attention.
arXiv Detail & Related papers (2024-03-07T14:14:40Z)
- Filter Bubbles in Recommender Systems: Fact or Fallacy -- A Systematic Review [7.121051191777698]
A filter bubble refers to the phenomenon where Internet customization effectively isolates individuals from diverse opinions or materials.
We conduct a systematic literature review on the topic of filter bubbles in recommender systems.
We propose mechanisms to mitigate the impact of filter bubbles and demonstrate that incorporating diversity into recommendations can potentially help alleviate this issue.
arXiv Detail & Related papers (2023-07-02T13:41:42Z)
- Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles [0.5898451150401338]
We present results of an auditing study performed over YouTube aimed at investigating how fast a user can get into a misinformation filter bubble.
We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation promoting content.
Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation debunking content.
arXiv Detail & Related papers (2022-10-18T18:27:47Z)
- Mitigating Filter Bubbles within Deep Recommender Systems [2.3590112541068575]
Recommender systems have been known to intellectually isolate users from a variety of perspectives, causing filter bubbles.
We characterize and mitigate this filter bubble effect by classifying various datapoints based on their user-item interaction history.
We mitigate this filter bubble effect without compromising accuracy by carefully retraining our recommender system.
arXiv Detail & Related papers (2022-09-16T22:00:10Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
It suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
- LFW-Beautified: A Dataset of Face Images with Beautification and Augmented Reality Filters [53.180678723280145]
We contribute with a database of facial images that includes several manipulations.
It includes image enhancement filters (which mostly modify contrast and lighting) and augmented reality filters that incorporate items like animal noses or glasses.
Each dataset contains 4,324 images of size 64 x 64, with a total of 34,592 images.
arXiv Detail & Related papers (2022-03-11T17:05:10Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time (a rough sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
- Can Celebrities Burst Your Bubble? [2.6919164079336992]
Using a state-of-the-art model that quantifies the degree of polarization, this paper makes a first attempt to empirically answer the question: can celebrities burst filter bubbles?
We use a case study to analyze how people react when celebrities are involved in a controversial topic and conclude with a list of possible research directions.
arXiv Detail & Related papers (2020-03-15T15:53:27Z)
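As a side note on the "Misinformation Has High Perplexity" entry above: the evidence-conditioned perplexity idea can be sketched with an off-the-shelf language model. The snippet below is a minimal illustration assuming GPT-2 via Hugging Face Transformers; the models, evidence-retrieval step, and decision thresholds used by that paper are not specified here, and the example claims are invented.

```python
# Hypothetical sketch: perplexity of a claim after priming the model with evidence.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens given the evidence as a prefix."""
    evidence_ids = tokenizer(evidence, return_tensors="pt").input_ids
    claim_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evidence_ids, claim_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : evidence_ids.size(1)] = -100  # score only the claim tokens
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())


# Invented example: a claim that fits the evidence poorly should score higher.
evidence = "Multiple peer-reviewed trials found the vaccine to be safe and effective."
print(claim_perplexity(evidence, "The vaccine was found to be safe."))
print(claim_perplexity(evidence, "The vaccine secretly implants microchips."))
```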