Subscriptions and external links help drive resentful users to
alternative and extremist YouTube videos
- URL: http://arxiv.org/abs/2204.10921v2
- Date: Sun, 2 Apr 2023 22:01:22 GMT
- Title: Subscriptions and external links help drive resentful users to
alternative and extremist YouTube videos
- Authors: Annie Y. Chen, Brendan Nyhan, Jason Reifler, Ronald E. Robertson,
Christo Wilson
- Abstract summary: We show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment.
Our findings suggest YouTube's algorithms were not sending people down "rabbit holes" during our observation window in 2020.
However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Do online platforms facilitate the consumption of potentially harmful
content? Using paired behavioral and survey data provided by participants
recruited from a representative sample in 2020 (n=1,181), we show that exposure
to alternative and extremist channel videos on YouTube is heavily concentrated
among a small group of people with high prior levels of gender and racial
resentment. These viewers often subscribe to these channels (prompting
recommendations to their videos) and follow external links to them. In
contrast, non-subscribers rarely see or follow recommendations to videos from
these channels. Our findings suggest YouTube's algorithms were not sending
people down "rabbit holes" during our observation window in 2020, possibly due
to changes that the company made to its recommender system in 2019. However,
the platform continues to play a key role in facilitating exposure to content
from alternative and extremist channels among dedicated audiences.
Related papers
- The Conspiracy Money Machine: Uncovering Telegram's Conspiracy Channels and their Profit Model [50.80312055220701]
We discover that conspiracy channels can be clustered into four distinct communities comprising over 17,000 channels.
We find conspiracy theorists leverage e-commerce platforms to sell questionable products or lucratively promote them through affiliate links.
We conclude that this business involves hundreds of thousands of donors and generates a turnover of almost $66 million.
arXiv Detail & Related papers (2023-10-24T16:25:52Z)
- How to Train Your YouTube Recommender to Avoid Unwanted Videos [51.6864681332515]
"Not interested" and "Don't recommend channel" buttons allow users to indicate disinterest when presented with unwanted recommendations.
We simulated YouTube users with sock puppet agents.
We found that the "Not interested" button worked best, significantly reducing such recommendations in all topics tested.
arXiv Detail & Related papers (2023-07-27T00:21:29Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Examining the Production of Co-active Channels on YouTube and BitChute [0.0]
This study explores differences in video production across 27 co-active channels on YouTube and BitChute.
We find that the majority of channels use significantly more moral and political words in their video titles on BitChute than in their video titles on YouTube.
In some cases, we find that channels produce videos on different sets of topics across the platforms, often producing content on BitChute that would likely be moderated on YouTube.
arXiv Detail & Related papers (2023-03-14T12:51:46Z)
- YouTubers Not madeForKids: Detecting Channels Sharing Inappropriate Videos Targeting Children [3.936965297430477]
We study YouTube channels found to post suitable or disturbing videos targeting kids in the past.
We identify a clear discrepancy between what YouTube assumes and flags as inappropriate content and channel, vs. what is found to be disturbing content and still available on the platform.
arXiv Detail & Related papers (2022-05-27T10:34:15Z)
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find that some infamous activities, such as carding, are also present on privacy-preserving services of the Dark Web.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
- Auditing the Biases Enacted by YouTube for Political Topics in Germany [0.0]
We examine whether YouTube's recommendation system is enacting certain biases.
We find that YouTube is recommending increasingly popular but topically unrelated videos.
We discuss the strong popularity bias we identified and analyze the link between the popularity of content and emotions.
arXiv Detail & Related papers (2021-07-21T07:53:59Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z)
- Examining the consumption of radical content on YouTube [1.2820564400223966]
Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical anti-woke channels.
Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior.
We find no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right.
arXiv Detail & Related papers (2020-11-25T16:00:20Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.