What is BitChute? Characterizing the "Free Speech" Alternative to
YouTube
- URL: http://arxiv.org/abs/2004.01984v3
- Date: Fri, 29 May 2020 21:07:52 GMT
- Title: What is BitChute? Characterizing the "Free Speech" Alternative to
YouTube
- Authors: Milo Trujillo, Maurício Gruppi, Cody Buntain, Benjamin D. Horne
- Abstract summary: We characterize the content and discourse on BitChute, a social video-hosting platform.
We find that BitChute has a higher rate of hate speech than Gab but less than 4chan.
While some BitChute content producers have been banned from other platforms, many maintain profiles on mainstream social media platforms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we characterize the content and discourse on BitChute, a
social video-hosting platform. Launched in 2017 as an alternative to YouTube,
BitChute joins an ecosystem of alternative, low content moderation platforms,
including Gab, Voat, Minds, and 4chan. Uniquely, BitChute is the first of these
alternative platforms to focus on video content and is growing in popularity.
Our analysis reveals several key characteristics of the platform. We find that
only a handful of channels receive any engagement, and almost all of those
channels contain conspiracies or hate speech. This high rate of hate speech on
the platform as a whole, much of which is anti-Semitic, is particularly
concerning. Our results suggest that BitChute has a higher rate of hate speech
than Gab but less than 4chan. Lastly, we find that while some BitChute content
producers have been banned from other platforms, many maintain profiles on
mainstream social media platforms, particularly YouTube. This paper contributes
a first look at the content and discourse on BitChute and provides a building
block for future research on low content moderation platforms.
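The cross-platform hate speech comparison above implies a per-comment rate measurement. As a minimal sketch only, the snippet below computes the fraction of sampled comments that contain at least one term from a hate-term lexicon; the term set, sample data, and function names are placeholders introduced here for illustration, not the paper's actual dictionary, crawl, or pipeline.

```python
import re

# Placeholder lexicon; a real analysis would rely on an established
# hate-term dictionary rather than these stand-in tokens.
HATE_TERMS = {"placeholder_slur_1", "placeholder_slur_2"}

def hate_speech_rate(comments):
    """Fraction of comments containing at least one lexicon term."""
    def flagged(text):
        tokens = re.findall(r"[\w']+", text.lower())
        return any(tok in HATE_TERMS for tok in tokens)
    return sum(flagged(c) for c in comments) / len(comments) if comments else 0.0

# Toy comparison; real samples would be comments crawled from each platform.
samples = {
    "bitchute": ["an ordinary comment", "a comment with placeholder_slur_1"],
    "gab": ["an ordinary comment", "another ordinary comment"],
}
for platform, comments in samples.items():
    print(platform, hate_speech_rate(comments))
```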
Related papers
- The Conspiracy Money Machine: Uncovering Telegram's Conspiracy Channels and their Profit Model [50.80312055220701]
We discover that conspiracy channels can be clustered into four distinct communities comprising over 17,000 channels.
We find conspiracy theorists leverage e-commerce platforms to sell questionable products or lucratively promote them through affiliate links.
We conclude that this business involves hundreds of thousands of donors and generates a turnover of almost $66 million.
arXiv Detail & Related papers (2023-10-24T16:25:52Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- HateMM: A Multi-Modal Dataset for Hate Video Classification [8.758311170297192]
We build deep learning multi-modal models to classify hate videos and observe that using all modalities improves overall hate speech detection performance.
Our work takes the first step toward understanding and modeling hateful videos on video hosting platforms such as BitChute.
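As a rough illustration of the multi-modal point above, the sketch below assumes a simple late-fusion design in PyTorch: per-modality embeddings (e.g., transcript, audio, and frame features) are concatenated before a small classification head. The dimensions, module, and variable names are assumptions for illustration, not the HateMM models themselves.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate text, audio, and video embeddings, then classify hate vs. non-hate."""

    def __init__(self, text_dim=768, audio_dim=128, video_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + audio_dim + video_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_emb, audio_emb, video_emb):
        # Late fusion: stack the per-modality vectors into one feature vector.
        fused = torch.cat([text_emb, audio_emb, video_emb], dim=-1)
        return self.head(fused)

# Toy batch of 4 videos, with random vectors standing in for real features.
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```

Dropping one of the three inputs (and the matching slice of the first linear layer) would mimic the single-modality baselines that the all-modalities comparison implies.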
arXiv Detail & Related papers (2023-05-06T03:39:00Z)
- Hate Speech Targets Detection in Parler using BERT [0.0]
We present a pipeline for detecting hate speech and its targets and use it for creating Parler hate targets' distribution.
The pipeline consists of two models; one for hate speech detection and the second for target classification.
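To make that two-stage design concrete, here is a minimal sketch of the control flow: a binary hate speech detector feeds only its positive predictions to a target classifier. The keyword stub and the returned label are placeholders standing in for the paper's fine-tuned BERT models.

```python
def detect_hate(post: str) -> bool:
    # Stage 1 stub: a real system would call a fine-tuned hate speech classifier.
    return "placeholder_hate_marker" in post.lower()

def classify_target(post: str) -> str:
    # Stage 2 stub: a real system would predict the targeted group.
    return "placeholder_target_group"

def run_pipeline(posts):
    """Run stage 2 only on posts that stage 1 flags as hateful."""
    return [classify_target(p) for p in posts if detect_hate(p)]

print(run_pipeline(["an ordinary post", "a post with placeholder_hate_marker"]))
```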
arXiv Detail & Related papers (2023-04-03T17:49:04Z)
- Examining the Production of Co-active Channels on YouTube and BitChute [0.0]
This study explores differences in video production across 27 co-active channels on YouTube and BitChute.
We find that the majority of channels use significantly more moral and political words in their video titles on BitChute than in their video titles on YouTube.
In some cases, we find that channels produce videos on different sets of topics across the platforms, often producing content on BitChute that would likely be moderated on YouTube.
arXiv Detail & Related papers (2023-03-14T12:51:46Z)
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find that some infamous activities known from privacy-preserving services of the Dark Web, such as carding, are also present on Telegram.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
- Comparing the Language of QAnon-related content on Parler, Gab, and Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump have more anti-social language than posts mentioning male politicians or Republicans.
arXiv Detail & Related papers (2021-11-22T11:19:15Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Characterizing Abhorrent, Misinformative, and Mistargeted Content on YouTube [1.9138099871648453]
We study the degree of problematic content on YouTube and the role of the recommendation algorithm in the dissemination of such content.
Our analysis reveals that young children are likely to encounter disturbing content when they randomly browse the platform.
We find that Incel activity is increasing over time and that platforms may play an active role in steering users towards extreme content.
arXiv Detail & Related papers (2021-05-20T15:10:48Z)
- Examining the consumption of radical content on YouTube [1.2820564400223966]
Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical anti-woke channels.
Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior.
We find no evidence that engagement with far-right content is systematically caused by YouTube recommendations, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right.
arXiv Detail & Related papers (2020-11-25T16:00:20Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)