Auditing the Biases Enacted by YouTube for Political Topics in Germany
- URL: http://arxiv.org/abs/2107.09922v1
- Date: Wed, 21 Jul 2021 07:53:59 GMT
- Title: Auditing the Biases Enacted by YouTube for Political Topics in Germany
- Authors: Hendrik Heuer, Hendrik Hoch, Andreas Breiter, Yannis Theocharis
- Abstract summary: We examine whether YouTube's recommendation system is enacting certain biases.
We find that YouTube is recommending increasingly popular but topically unrelated videos.
We discuss the strong popularity bias we identified and analyze the link between the popularity of content and emotions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With YouTube's growing importance as a news platform, its recommendation
system came under increased scrutiny. Recognizing YouTube's recommendation
system as a broadcaster of media, we explore the applicability of laws that
require broadcasters to give important political, ideological, and social
groups adequate opportunity to express themselves in the broadcasted program of
the service. We present audits as an important tool to enforce such laws and to
ensure that a system operates in the public's interest. To examine whether
YouTube is enacting certain biases, we collected video recommendations about
political topics by following chains of ten recommendations per video. Our
findings suggest that YouTube's recommendation system is enacting important
biases. We find that YouTube is recommending increasingly popular but topically
unrelated videos. The sadness evoked by the recommended videos decreases while
the happiness increases. We discuss the strong popularity bias we identified
and analyze the link between the popularity of content and emotions. We also
discuss how audits empower researchers and civic hackers to monitor complex
machine learning (ML)-based systems like YouTube's recommendation system.
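A minimal sketch of the chain-following methodology described in the abstract, assuming a hypothetical `get_recommendations` helper (YouTube's official API offers no direct endpoint for watch-page recommendations, so in practice this would likely be backed by scraping or browser automation; the paper's actual crawler may differ):

```python
# Sketch of a recommendation-chain audit: starting from a seed video,
# repeatedly take the top recommendation, logging a chain of ten videos
# as in the methodology described above. `get_recommendations` is a
# hypothetical stand-in, not a real API call.

from typing import Callable, List

def follow_chain(seed_id: str,
                 get_recommendations: Callable[[str], List[str]],
                 depth: int = 10) -> List[str]:
    """Follow the top recommendation `depth` times, starting from `seed_id`."""
    chain = [seed_id]
    current = seed_id
    for _ in range(depth):
        recs = get_recommendations(current)  # hypothetical: ranked video IDs
        if not recs:
            break
        current = recs[0]  # take the top-ranked recommendation
        chain.append(current)
    return chain

# Usage: audit several politically topical seed videos.
# seeds = ["abc123", "def456"]
# chains = [follow_chain(s, get_recommendations) for s in seeds]
```

Logging view counts and emotion scores for each video along these chains would then allow the popularity and emotion trends reported above to be measured.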
Related papers
- Cognitive Biases in Large Language Models for News Recommendation [68.90354828533535]
This paper explores the potential impact of cognitive biases on large language models (LLMs) based news recommender systems.
We discuss strategies to mitigate these biases through data augmentation, prompt engineering, and the design of learning algorithms.
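As one illustration of the prompt-engineering direction (our own example, not necessarily a strategy from the paper): shuffling candidate order counters position bias, and an explicit instruction counters popularity cues.

```python
import random

def build_debiased_prompt(history: list[str], candidates: list[str]) -> str:
    """Build a news-ranking prompt that shuffles candidates (to counter
    position bias) and tells the model to ignore popularity cues."""
    shuffled = random.sample(candidates, k=len(candidates))  # break position bias
    history_block = "\n".join(f"- {t}" for t in history)
    candidate_block = "\n".join(f"- {t}" for t in shuffled)
    return (
        "Rank the candidate articles below purely by topical relevance to "
        "the user's reading history. Ignore how popular or widely shared "
        "an article appears to be.\n\n"
        f"Reading history:\n{history_block}\n\n"
        f"Candidates:\n{candidate_block}\n\n"
        "Return the titles in ranked order."
    )
```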
arXiv Detail & Related papers (2024-10-03T18:42:07Z)
- Large Language Models as Recommender Systems: A Study of Popularity Bias [46.17953988777199]
Popular items are disproportionately recommended, overshadowing less popular but potentially relevant items.
Recent advancements have seen the integration of general-purpose Large Language Models into recommender systems.
Our study explores whether LLMs contribute to or can alleviate popularity bias in recommender systems.
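One simple way to operationalize that question, sketched below with a metric of our own choosing (not necessarily the study's): compare the mean popularity of recommended items against the catalog mean.

```python
from statistics import mean

def popularity_lift(recommended: list[str], popularity: dict[str, int]) -> float:
    """Ratio of mean popularity of recommended items to the catalog mean.
    Values well above 1.0 indicate a popularity bias in the recommendations."""
    catalog_mean = mean(popularity.values())
    rec_mean = mean(popularity[item] for item in recommended)
    return rec_mean / catalog_mean

# Toy example: a slate dominated by the most popular item scores well above 1.0.
# popularity = {"a": 1000, "b": 10, "c": 5, "d": 2}
# print(popularity_lift(["a", "b"], popularity))
```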
arXiv Detail & Related papers (2024-06-03T12:53:37Z)
- Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach that addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
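A hedged sketch of that individual-fairness idea, using a Lipschitz-style hinge penalty of our own choosing (the paper's actual loss may differ): embedding distances should not exceed the corresponding audio-feature distances.

```python
import numpy as np

def individual_fairness_penalty(embeddings: np.ndarray,
                                audio_features: np.ndarray) -> float:
    """Penalize pairs of songs whose embedding distance exceeds their
    audio-feature distance: similar-sounding songs should stay close."""
    n = embeddings.shape[0]
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_emb = np.linalg.norm(embeddings[i] - embeddings[j])
            d_audio = np.linalg.norm(audio_features[i] - audio_features[j])
            penalty += max(0.0, d_emb - d_audio)  # hinge on the fairness gap
    return penalty
```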
arXiv Detail & Related papers (2023-08-28T14:12:25Z)
- Assessing enactment of content regulation policies: A post hoc crowd-sourced audit of election misinformation on YouTube [9.023847175654602]
We conduct a 9-day crowd-sourced audit on YouTube to assess the extent of enactment of election misinformation policies.
We find that YouTube's search results contain more videos that oppose rather than support election misinformation.
However, watching misinformative election videos still leads users to a small number of misinformative videos in the up-next trails.
arXiv Detail & Related papers (2023-02-15T18:20:15Z)
- Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos [7.945705756085774]
We show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment.
Our findings suggest YouTube's algorithms were not sending people down "rabbit holes" during our observation window in 2020.
However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.
arXiv Detail & Related papers (2022-04-22T20:22:06Z)
- YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations [20.145485714154933]
We conduct a systematic audit of YouTube's recommendation system using a hundred thousand sock puppets.
We find that YouTube's recommendations do direct users -- especially right-leaning users -- to ideologically biased and increasingly radical content.
Our intervention effectively mitigates the observed bias, leading to more recommendations to ideologically neutral, diverse, and dissimilar content.
arXiv Detail & Related papers (2022-03-20T22:45:56Z)
- Examining the consumption of radical content on YouTube [1.2820564400223966]
Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical anti-woke channels.
Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior.
We find no evidence that engagement with far-right content is systematically caused by YouTube recommendations, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right.
arXiv Detail & Related papers (2020-11-25T16:00:20Z)
- Understanding YouTube Communities via Subscription-based Channel Embeddings [0.0]
This paper presents new methods to discover and classify YouTube channels.
The methods use a self-supervised learning approach that leverages the public subscription pages of commenters.
We create a new dataset to analyze the amount of traffic going to different political content.
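The co-subscription idea can be sketched as word2vec-style training over subscription lists, with each commenter's public subscriptions treated as one "sentence" (gensim is used here for illustration; the paper's exact model may differ):

```python
from gensim.models import Word2Vec

# Each commenter's public subscription list acts as one "sentence";
# channels that are frequently co-subscribed end up with similar embeddings.
subscription_lists = [
    ["channel_news_a", "channel_politics_b", "channel_comedy_c"],
    ["channel_politics_b", "channel_news_a"],
    # ... one list per commenter, collected from public subscription pages
]

model = Word2Vec(sentences=subscription_lists, vector_size=64,
                 window=50, min_count=1, sg=1)  # skip-gram over co-subscriptions
similar = model.wv.most_similar("channel_news_a")  # nearby channels
```

Clustering or classifying these channel vectors would then support the kind of traffic analysis across political content described above.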
arXiv Detail & Related papers (2020-10-19T22:00:04Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
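A hedged sketch of such a quality signal, using variance as the diversity measure and absolute mean as the extremity measure (our choices; the paper's exact formulation may differ):

```python
from statistics import mean, pvariance

def audience_diversity(partisanship_scores: list[float]) -> float:
    """Variance of visitors' partisanship (e.g., -1 = left, +1 = right).
    Low variance means a politically homogeneous audience, which the
    paper links to lower journalistic standards."""
    return pvariance(partisanship_scores)

def audience_extremity(partisanship_scores: list[float]) -> float:
    """Absolute mean partisanship: how far the audience leans overall."""
    return abs(mean(partisanship_scores))

# A site read almost exclusively by one side scores low on diversity:
# audience_diversity([0.9, 0.8, 0.95])   # ~0.004 (homogeneous)
# audience_diversity([-0.8, 0.1, 0.9])   # ~0.48  (diverse)
```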
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
- Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
It has become critical to build trustworthy recommender systems.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z)
- Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos [98.87558262467257]
This study explores cultural preferences across countries using the thumbnails of YouTube trending videos.
Experimental results indicate that users from similar cultures share an interest in watching similar videos on YouTube.
arXiv Detail & Related papers (2020-01-27T20:15:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.