A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos
- URL: http://arxiv.org/abs/2003.03318v1
- Date: Fri, 6 Mar 2020 17:31:30 GMT
- Title: A Longitudinal Analysis of YouTube's Promotion of Conspiracy Videos
- Authors: Marc Faddoul, Guillaume Chaslot and Hany Farid
- Abstract summary: Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence.
Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized.
We have developed a classifier for automatically determining if a video is conspiratorial.
- Score: 14.867862489411868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conspiracy theories have flourished on social media, raising concerns that
such content is fueling the spread of disinformation, supporting extremist
ideologies, and in some cases, leading to violence. Under increased scrutiny
and pressure from legislators and the public, YouTube announced efforts to
change their recommendation algorithms so that the most egregious conspiracy
videos are demoted and demonetized. To verify this claim, we have developed a
classifier for automatically determining if a video is conspiratorial (e.g.,
the moon landing was faked, the pyramids of Giza were built by aliens, end of
the world prophecies, etc.). We coupled this classifier with an emulation of
YouTube's watch-next algorithm on more than a thousand popular informational
channels to obtain a year-long picture of the videos actively promoted by
YouTube. We also obtained trends of the so-called filter-bubble effect for
conspiracy theories.
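The abstract does not specify how the classifier works. As a rough illustration only, a minimal Naive-Bayes-style scorer over video text (titles, descriptions, or transcripts) might look like the sketch below; the training phrases, function names, and the two-class setup are all hypothetical, not the authors' method:

```python
from collections import Counter
import math

def train(labeled_docs):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"conspiracy": Counter(), "other": Counter()}
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Naive-Bayes-style log-odds that `text` is conspiratorial.
    Positive means more conspiracy-like; add-one smoothing avoids
    zero probabilities for unseen words."""
    vocab = set(counts["conspiracy"]) | set(counts["other"])
    n_c = sum(counts["conspiracy"].values()) + len(vocab)
    n_o = sum(counts["other"].values()) + len(vocab)
    log_odds = 0.0
    for word in text.lower().split():
        p_c = (counts["conspiracy"][word] + 1) / n_c
        p_o = (counts["other"][word] + 1) / n_o
        log_odds += math.log(p_c / p_o)
    return log_odds

# Hypothetical training snippets -- a real system would use
# thousands of transcripts and descriptions, not a handful.
training = [
    ("the moon landing was faked by nasa", "conspiracy"),
    ("aliens built the pyramids of giza", "conspiracy"),
    ("end of the world prophecy revealed", "conspiracy"),
    ("how vaccines are developed and tested", "other"),
    ("a history of the apollo space program", "other"),
    ("documentary on ancient egyptian engineering", "other"),
]
model = train(training)
print(score(model, "the moon landing was faked"))  # positive: flagged
```

Such a per-video score could then be aggregated over the videos surfaced by a watch-next emulation to track what share of recommendations is conspiratorial over time.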
Related papers
- The Conspiracy Money Machine: Uncovering Telegram's Conspiracy Channels and their Profit Model [50.80312055220701]
We discover that conspiracy channels can be clustered into four distinct communities comprising over 17,000 channels.
We find conspiracy theorists leverage e-commerce platforms to sell questionable products or lucratively promote them through affiliate links.
We conclude that this business involves hundreds of thousands of donors and generates a turnover of almost $66 million.
arXiv Detail & Related papers (2023-10-24T16:25:52Z)
- YouNICon: YouTube's CommuNIty of Conspiracy Videos [7.135697290631831]
YOUNICON is a dataset with a large collection of videos from suspicious channels identified as containing conspiracy theories.
The dataset enables researchers to perform conspiracy theory detection and to classify conspiracy videos into different topics.
arXiv Detail & Related papers (2023-04-11T15:20:51Z)
- A Golden Age: Conspiracy Theories' Relationship with Misinformation Outlets, News Media, and the Wider Internet [6.917588580148212]
We identify and publish a set of 755 different conspiracy theory websites dedicated to five conspiracy theories.
We find that each set often hyperlinks to the same external domains, with COVID and QAnon conspiracy theory websites having the largest amount of shared connections.
Examining the role of news media, we find that outlets known for spreading misinformation hyperlink to our set of conspiracy theory websites more often than authentic news websites do.
arXiv Detail & Related papers (2023-01-26T00:20:02Z)
- Video Manipulations Beyond Faces: A Dataset with Human-Machine Analysis [60.13902294276283]
Many existing deepfake datasets focus exclusively on two types of facial manipulations -- swapping with a different subject's face or altering the existing face.
We present VideoSham, a dataset consisting of 826 videos (413 real and 413 manipulated).
Our analysis shows that state-of-the-art manipulation detection algorithms only work for a few specific attacks and do not scale well on VideoSham.
arXiv Detail & Related papers (2022-07-26T17:39:04Z)
- Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z)
- Conspiracy Brokers: Understanding the Monetization of YouTube Conspiracy Theories [8.416017904031792]
We collect 184,218 ad impressions from 6,347 unique advertisers found on conspiracy-focused channels and mainstream YouTube content.
In comparison with mainstream content, conspiracy videos had similar levels of ads from well-known brands, but an almost eleven times higher prevalence of likely predatory or deceptive ads.
arXiv Detail & Related papers (2022-05-31T16:42:52Z)
- Where the Earth is flat and 9/11 is an inside job: A comparative algorithm audit of conspiratorial information in web search results [62.997667081978825]
We examine the distribution of conspiratorial information in search results across five search engines: Google, Bing, DuckDuckGo, Yahoo and Yandex.
We find that all search engines except Google consistently displayed conspiracy-promoting results and returned links to conspiracy-dedicated websites in their top results.
Most conspiracy-promoting results came from social media and conspiracy-dedicated websites while conspiracy-debunking information was shared by scientific websites and, to a lesser extent, legacy media.
arXiv Detail & Related papers (2021-12-02T14:29:21Z)
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find that some infamous activities associated with privacy-preserving Dark Web services, such as carding, are also present on Telegram.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
- The Truth is Out There: Investigating Conspiracy Theories in Text Generation [66.01545519772527]
We investigate the propensity for language models to generate conspiracy theory text.
Our study focuses on testing these models for the elicitation of conspiracy theories.
We introduce a new dataset consisting of conspiracy theory topics, machine-generated conspiracy theories, and human-written conspiracy theories.
arXiv Detail & Related papers (2021-01-02T05:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.