YouTube, The Great Radicalizer? Auditing and Mitigating Ideological
Biases in YouTube Recommendations
- URL: http://arxiv.org/abs/2203.10666v2
- Date: Fri, 25 Mar 2022 01:36:47 GMT
- Title: YouTube, The Great Radicalizer? Auditing and Mitigating Ideological
Biases in YouTube Recommendations
- Authors: Muhammad Haroon, Anshuman Chhabra, Xin Liu, Prasant Mohapatra, Zubair
Shafiq, Magdalena Wojcieszak
- Abstract summary: We conduct a systematic audit of YouTube's recommendation system using a hundred thousand sock puppets.
We find that YouTube's recommendations do direct users -- especially right-leaning users -- to ideologically biased and increasingly radical content.
Our intervention effectively mitigates the observed bias, leading to more recommendations of ideologically neutral, diverse, and dissimilar content.
- Score: 20.145485714154933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommendation algorithms of social media platforms are often criticized for
placing users in "rabbit holes" of (increasingly) ideologically biased content.
Despite these concerns, prior evidence on this algorithmic radicalization is
inconsistent. Furthermore, prior work lacks systematic interventions that
reduce the potential ideological bias in recommendation algorithms. We conduct
a systematic audit of YouTube's recommendation system using a hundred thousand
sock puppets to determine the presence of ideological bias (i.e., are
recommendations aligned with users' ideology), its magnitude (i.e., are users
recommended an increasing number of videos aligned with their ideology), and
radicalization (i.e., are the recommendations progressively more extreme).
Furthermore, we design and evaluate a bottom-up intervention to minimize
ideological bias in recommendations without relying on cooperation from
YouTube. We find that YouTube's recommendations do direct users -- especially
right-leaning users -- to ideologically biased and increasingly radical content
both on homepages and in up-next recommendations. Our intervention effectively
mitigates the observed bias, leading to more recommendations of ideologically
neutral, diverse, and dissimilar content, yet debiasing is especially
challenging for right-leaning users. Our systematic assessment shows that while
YouTube recommendations lead to ideological bias, such bias can be mitigated
through our intervention.
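As a rough illustration of the three quantities the audit measures (presence of bias, its magnitude, and radicalization), the sketch below summarizes a hypothetical sock puppet's recommendation trails using made-up per-video slant scores; the data structures and scoring are illustrative assumptions, not the authors' instrumentation.
```python
# Illustrative sketch (not the authors' code): summarizing sock-puppet
# recommendation trails with hypothetical per-video slant scores in [-1, 1],
# where negative is left-leaning and positive is right-leaning.
from statistics import mean

def bias_presence(trail, puppet_slant):
    """Share of recommended videos whose slant matches the puppet's side."""
    aligned = [v for v in trail if v * puppet_slant > 0]
    return len(aligned) / len(trail)

def trend(values):
    """Simple least-squares slope: positive means the quantity grows with depth."""
    xs = range(len(values))
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def audit(trails_by_depth, puppet_slant):
    """trails_by_depth: list of recommendation lists, one per successive depth."""
    alignment_by_depth = [bias_presence(t, puppet_slant) for t in trails_by_depth]
    extremity_by_depth = [mean(abs(v) for v in t) for t in trails_by_depth]
    return {
        "bias_presence": mean(alignment_by_depth),    # are recs aligned at all?
        "bias_magnitude": trend(alignment_by_depth),  # growing share of aligned recs?
        "radicalization": trend(extremity_by_depth),  # growing extremity with depth?
    }

# Example: a right-leaning puppet (+1) whose recommendations drift rightward.
print(audit([[0.1, -0.2, 0.3], [0.4, 0.5, -0.1], [0.6, 0.7, 0.8]], puppet_slant=+1))
```
A positive bias_magnitude or radicalization slope would correspond to the drift the paper reports, particularly for right-leaning puppets.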
Related papers
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Echo Chambers in the Age of Algorithms: An Audit of Twitter's Friend Recommender System [2.8186456204337746]
We conduct an algorithmic audit of Twitter's Who-To-Follow friend recommendation system.
We create automated Twitter accounts that initially follow left and right affiliated U.S. politicians during the 2022 U.S. midterm elections.
We find that while following the recommendation algorithm leads accounts into dense and reciprocal neighborhoods that structurally resemble echo chambers, the recommender also results in less political homogeneity of a user's network.
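As a rough sketch of the network statistics behind these structural claims, the toy example below computes edge reciprocity and per-account political homogeneity for a hypothetical follow graph; the graph and ideology labels are invented for illustration and are not the study's data or code.
```python
# Illustrative sketch with a made-up follow graph; not the study's data or code.
follows = {                      # account -> set of accounts it follows
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a"}, "d": {"a"},
}
ideology = {"a": "right", "b": "right", "c": "right", "d": "left"}

def reciprocity(follows):
    """Share of follow edges that are returned (a follows b and b follows a)."""
    edges = [(u, v) for u, vs in follows.items() for v in vs]
    returned = sum(1 for u, v in edges if u in follows.get(v, set()))
    return returned / len(edges)

def homogeneity(follows, ideology, account):
    """Share of an account's followed neighbors that share its ideology."""
    neighbors = follows[account]
    same = sum(1 for v in neighbors if ideology[v] == ideology[account])
    return same / len(neighbors)

print(reciprocity(follows))                 # density of mutual ties ("echo chamber" structure)
print(homogeneity(follows, ideology, "a"))  # political homogeneity of one account's network
```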
arXiv Detail & Related papers (2024-04-09T16:12:22Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
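A minimal sketch of such an individual-fairness penalty, assuming audio feature vectors as the ground-truth notion of how songs sound; the paper's actual GNN-based formulation may differ.
```python
import numpy as np

def individual_fairness_penalty(audio_features, embeddings):
    """Penalize pairs whose embedding distance exceeds their audio (ground-truth) distance.

    audio_features: (n, d_a) array describing how songs sound.
    embeddings:     (n, d_e) array produced by the recommender.
    """
    def pairwise_dist(x):
        diff = x[:, None, :] - x[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    d_audio = pairwise_dist(audio_features)
    d_embed = pairwise_dist(embeddings)
    # Hinge-style: only pairs that are farther apart in embedding space than in
    # audio space contribute, so similar-sounding songs are pulled together.
    violation = np.maximum(d_embed - d_audio, 0.0)
    return violation.mean()

# Usage: add weight * individual_fairness_penalty(...) to the recommender's training loss.
rng = np.random.default_rng(0)
print(individual_fairness_penalty(rng.normal(size=(5, 8)), rng.normal(size=(5, 4))))
```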
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos [7.945705756085774]
We show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment.
Our findings suggest YouTube's algorithms were not sending people down "rabbit holes" during our observation window in 2020.
However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.
arXiv Detail & Related papers (2022-04-22T20:22:06Z)
- Movie Recommender System using critic consensus [0.0]
We propose a hybrid recommendation system that integrates collaborative and content-based filtering.
The model recommends movies based on a combination of user preferences and critic consensus scores.
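One simple way to realize such a hybrid is a weighted blend of a collaborative-filtering preference estimate and a critic consensus score; the weighting and score scales below are illustrative assumptions rather than the paper's model.
```python
def hybrid_score(cf_prediction, critic_consensus, alpha=0.7):
    """Blend a collaborative-filtering rating estimate (0-5) with a critic
    consensus score (0-100, rescaled to 0-5). alpha weights the user signal."""
    return alpha * cf_prediction + (1 - alpha) * (critic_consensus / 20.0)

# Rank candidate movies by the blended score (toy example).
candidates = {"Movie A": (4.2, 55), "Movie B": (3.6, 93)}
ranking = sorted(candidates, key=lambda m: hybrid_score(*candidates[m]), reverse=True)
print(ranking)
```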
arXiv Detail & Related papers (2021-12-22T13:04:41Z)
- Auditing the Biases Enacted by YouTube for Political Topics in Germany [0.0]
We examine whether YouTube's recommendation system is enacting certain biases.
We find that YouTube is recommending increasingly popular but topically unrelated videos.
We discuss the strong popularity bias we identified and analyze the link between the popularity of content and emotions.
arXiv Detail & Related papers (2021-07-21T07:53:59Z)
- Middle-Aged Video Consumers' Beliefs About Algorithmic Recommendations on YouTube [2.8325478162326885]
We conduct semi-structured interviews with middle-aged YouTube video consumers to analyze user beliefs about the video recommendation system.
We identify four groups of user beliefs: Previous Actions, Social Media, Recommender System, and Company Policy.
We propose a framework to distinguish the four main actors that users believe influence their video recommendations.
arXiv Detail & Related papers (2020-08-07T14:35:50Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
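The proposed signal is essentially a dispersion statistic over the ideology of a site's visitors. A minimal sketch with hypothetical browsing data; the domain names, scores, and the choice of standard deviation as the diversity measure are assumptions for illustration.
```python
from statistics import pstdev, mean

# Hypothetical ideology scores (-1 = left, +1 = right) of visitors per news domain.
visits = {
    "broadsheet.example":    [-0.6, -0.1, 0.2, 0.5, 0.7],
    "hyperpartisan.example": [0.8, 0.9, 0.9, 1.0, 0.95],
}

def audience_signal(scores):
    """Diverse (high spread) and non-extreme (mean near 0) audiences signal quality."""
    return {"diversity": pstdev(scores), "extremity": abs(mean(scores))}

for domain, scores in visits.items():
    print(domain, audience_signal(scores))
# Low-diversity, high-extremity audiences would be down-weighted in ranking.
```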
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
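A fairness-constrained re-ranking step can be sketched as scoring each candidate by relevance plus a weighted credit for reducing the quality gap of the disadvantaged (e.g., inactive) user group; this greedy toy version is a generic illustration, not the paper's exact algorithm.
```python
def fair_rerank(candidates, k, lam=0.5):
    """Re-rank candidates by relevance plus a weighted fairness credit.

    candidates: list of (item, relevance, fairness_gain) where fairness_gain is
    how much recommending the item reduces the recommendation-quality gap for
    the disadvantaged (e.g., inactive) user group.
    """
    scored = sorted(candidates, key=lambda c: c[1] + lam * c[2], reverse=True)
    return [item for item, _, _ in scored[:k]]

# Toy usage: with lam=0 the top-2 would be chosen by relevance alone.
cands = [("i1", 0.9, 0.0), ("i2", 0.7, 0.6), ("i3", 0.8, 0.1), ("i4", 0.5, 0.9)]
print(fair_rerank(cands, k=2))
```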
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
It is becoming critical to build trustworthy recommender systems.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)