Cross-Partisan Discussions on YouTube: Conservatives Talk to Liberals but Liberals Don't Talk to Conservatives
- URL: http://arxiv.org/abs/2104.05365v1
- Date: Mon, 12 Apr 2021 11:32:37 GMT
- Title: Cross-Partisan Discussions on YouTube: Conservatives Talk to Liberals but Liberals Don't Talk to Conservatives
- Authors: Siqi Wu, Paul Resnick
- Abstract summary: We find a surprising amount of cross-talk: most users with at least 10 comments posted at least once on both left-leaning and right-leaning YouTube channels.
Conservatives were much more likely to comment on left-leaning videos than liberals on right-leaning videos.
Cross-partisan replies were more toxic than co-partisan replies on both left-leaning and right-leaning videos.
- Score: 9.797488793708625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the first large-scale measurement study of cross-partisan
discussions between liberals and conservatives on YouTube, based on a dataset
of 274,241 political videos from 973 channels of US partisan media and 134M
comments from 9.3M users over eight months in 2020. Contrary to a simple
narrative of echo chambers, we find a surprising amount of cross-talk: most
users with at least 10 comments posted at least once on both left-leaning and
right-leaning YouTube channels. Cross-talk, however, was not symmetric. Based
on the user leaning predicted by a hierarchical attention model, we find that
conservatives were much more likely to comment on left-leaning videos than
liberals on right-leaning videos. Secondly, YouTube's comment sorting algorithm
made cross-partisan comments modestly less visible; for example, comments from
conservatives made up 26.3% of all comments on left-leaning videos but just
over 20% of the comments in the top 20 positions. Lastly, using
Perspective API's toxicity score as a measure of quality, we find that
conservatives were not significantly more toxic than liberals when users
directly commented on the content of videos. However, when users replied to
comments from other users, we find that cross-partisan replies were more toxic
than co-partisan replies on both left-leaning and right-leaning videos, with
cross-partisan replies being especially toxic on the replier's home turf.
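The abstract rests on three measurable components: a hierarchical attention model that predicts each commenter's leaning, a comparison of comment shares across ranking positions, and Perspective API toxicity scores. As an illustration only, below is a minimal PyTorch sketch of a hierarchical attention classifier in the spirit of the one the abstract mentions; the architecture, dimensions, and all names here are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionClassifier(nn.Module):
    """Hypothetical sketch: attend over the words of each comment, then over
    a user's comments, and classify the user's leaning (left vs. right).
    A simplified hierarchical-attention variant, not the paper's model."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.word_gru = nn.GRU(embed_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.word_attn = nn.Linear(2 * hidden, 1)     # scores each word
        self.comment_gru = nn.GRU(2 * hidden, hidden, batch_first=True,
                                  bidirectional=True)
        self.comment_attn = nn.Linear(2 * hidden, 1)  # scores each comment
        self.out = nn.Linear(2 * hidden, 2)           # left/right logits

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (users, comments_per_user, words_per_comment) of token ids
        U, C, W = tokens.shape
        h, _ = self.word_gru(self.embed(tokens.view(U * C, W)))
        a = torch.softmax(self.word_attn(h), dim=1)       # word weights
        comment_vecs = (a * h).sum(dim=1).view(U, C, -1)  # one vector/comment
        h2, _ = self.comment_gru(comment_vecs)
        a2 = torch.softmax(self.comment_attn(h2), dim=1)  # comment weights
        user_vec = (a2 * h2).sum(dim=1)                   # one vector/user
        return self.out(user_vec)

# Example: 4 users x 10 comments x 30 tokens -> (4, 2) leaning logits
logits = HierarchicalAttentionClassifier(vocab_size=50_000)(
    torch.randint(1, 50_000, (4, 10, 30)))
```

The other two measurements can be sketched just as briefly. The Perspective API endpoint and request shape below follow Google's documented commentanalyzer v1alpha1 API, but the API key placeholder, the DataFrame schema, and the `visibility_share` helper are hypothetical:

```python
import requests
import pandas as pd

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google Cloud API key

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0..1) for one comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY},
                         json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def visibility_share(df: pd.DataFrame, leaning: str = "right") -> tuple[float, float]:
    """Share of a group's comments overall vs. within the top 20 positions.

    Assumes hypothetical columns: 'rank' (1 = top of the sorted comment
    list) and 'commenter_leaning' ('left' or 'right')."""
    overall = float((df["commenter_leaning"] == leaning).mean())
    top20 = df[df["rank"] <= 20]
    top_share = float((top20["commenter_leaning"] == leaning).mean())
    return overall, top_share
```

On a comment table with ranks and predicted leanings, `visibility_share` would yield the kind of contrast the abstract reports (26.3% of all comments vs. just over 20% of top-20 comments), and averaging `toxicity` over cross- versus co-partisan reply pairs would reproduce the toxicity comparison.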
Related papers
- A Study of Partisan News Sharing in the Russian invasion of Ukraine [31.211851388657152]
Since the Russian invasion of Ukraine, a large volume of biased and partisan news has been spread via social media platforms.
We aim to characterize the role of such sharing in influencing users' communications.
We build a predictive model to identify users likely to spread partisan news.
arXiv Detail & Related papers (2023-11-26T13:25:11Z) - Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z) - Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers
and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z) - Demographic Confounding Causes Extreme Instances of Lifestyle Politics
on Facebook [73.37786708074361]
We find that the most extreme instances of lifestyle politics are those which are highly confounded by demographics such as race/ethnicity.
The most liberal interests included electric cars, Planned Parenthood, and liberal satire, while the most conservative interests included the Republican Party and conservative commentators.
arXiv Detail & Related papers (2022-01-17T16:48:00Z) - Comparing the Language of QAnon-related content on Parler, Gab, and
Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump have more anti-social language than posts mentioning male politicians, Republicans, or ...
arXiv Detail & Related papers (2021-11-22T11:19:15Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in a tendency for users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Examining the consumption of radical content on YouTube [1.2820564400223966]
Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical anti-woke channels.
Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior.
We find no evidence that engagement with far-right content is systematically caused by YouTube recommendations, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right.
arXiv Detail & Related papers (2020-11-25T16:00:20Z)
- Right and left, partisanship predicts (asymmetric) vulnerability to misinformation [71.46564239895892]
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z) - Neutral bots probe political bias on social media [7.41821251168122]
We deploy neutral social bots who start following different news sources on Twitter to probe distinct biases emerging from platform mechanisms versus user interactions.
We find no strong or consistent evidence of political bias in the news feed.
The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content, shifting their experience toward the political center.
arXiv Detail & Related papers (2020-05-17T01:20:24Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.