"I Won the Election!": An Empirical Analysis of Soft Moderation
Interventions on Twitter
- URL: http://arxiv.org/abs/2101.07183v2
- Date: Tue, 13 Apr 2021 10:05:00 GMT
- Authors: Savvas Zannettou
- Abstract summary: We study the users who share tweets with warning labels on Twitter and their political leaning.
We find that 72% of the tweets with warning labels are shared by Republicans, while only 11% are shared by Democrats.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past few years, there has been heated debate and serious public concern
regarding online content moderation, censorship, and the principle of free
speech on the Web. To ease these concerns, social media platforms like Twitter
and Facebook refined their content moderation systems to support soft
moderation interventions. Soft moderation interventions refer to warning labels
attached to potentially questionable or harmful content to inform other users
about the content and its nature while the content remains accessible, hence
alleviating concerns related to censorship and free speech. In this work, we
perform one of the first empirical studies on soft moderation interventions on
Twitter. Using a mixed-methods approach, we study the users who share tweets
with warning labels on Twitter and their political leaning, the engagement that
these tweets receive, and how users interact with tweets that have warning
labels. Among other things, we find that 72% of the tweets with warning labels
are shared by Republicans, while only 11% are shared by Democrats. By analyzing
content engagement, we find that tweets with warning labels had more engagement
compared to tweets without warning labels. Also, we qualitatively analyze how
users interact with content that has warning labels, finding that the most
popular interactions are related to further debunking false claims, mocking the
author or content of the disputed tweet, and further reinforcing or resharing
false claims. Finally, we describe concrete examples of inconsistencies, such
as warning labels that are incorrectly added or warning labels that are not
added on tweets despite sharing questionable and potentially harmful
information.
Related papers
- Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts [4.06613683722116]
This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news on Twitter.
We created 28 bots that replied to users tweeting about sports, entertainment, or lifestyle with a contextual reply.
To test differential effects by gender of the bots, treated users were randomly assigned to receive responses by bots presented as female or male.
We find that the treated users followed more news accounts and the users in the female bot treatment were more likely to like news content than the control.
arXiv Detail & Related papers (2024-03-20T07:44:06Z) - Russo-Ukrainian War: Prediction and explanation of Twitter suspension [47.61306219245444]
This study focuses on Twitter's suspension mechanism, analyzing the shared content and user account features that may lead to suspension.
We obtained a dataset of 107.7M tweets from 9.8M users via the Twitter API.
Our results reveal scam campaigns exploiting trending topics around the Russo-Ukrainian conflict for Bitcoin fraud, spam, and advertising.
arXiv Detail & Related papers (2023-06-06T08:41:02Z) - LAMBRETTA: Learning to Rank for Twitter Soft Moderation [11.319938541673578]
LAMBRETTA is a system that automatically identifies tweets that are candidates for soft moderation.
We run LAMBRETTA on Twitter data to moderate false claims related to the 2020 US Election.
It flags over 20 times more tweets than Twitter, with only 3.93% false positives and 18.81% false negatives.
arXiv Detail & Related papers (2022-12-12T14:41:46Z) - Predicting Hate Intensity of Twitter Conversation Threads [26.190359413890537]
We propose DRAGNET++, which aims to predict the intensity of hate that a tweet can provoke through its reply chain over time.
It uses the semantic and propagation structure of tweet threads to capture the contextual information behind the rise and fall of hate intensity at each subsequent tweet.
We show that DRAGNET++ outperforms all the state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-06-16T18:51:36Z) - Meaningful Context, a Red Flag, or Both? Users' Preferences for Enhanced
Misinformation Warnings on Twitter [6.748225062396441]
This study proposes user-tailored improvements in the soft moderation of misinformation on social media.
We ran an A/B evaluation against Twitter's original warning tags in a 337-participant usability study.
The majority of the participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation.
arXiv Detail & Related papers (2022-05-02T22:47:49Z) - Manipulating Twitter Through Deletions [64.33261764633504]
Research into influence campaigns on Twitter has mostly relied on identifying malicious activities from tweets obtained via public APIs.
Here, we provide the first exhaustive, large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts.
We find that a small fraction of accounts delete a large number of tweets daily, enabling two types of abuse.
First, limits on tweet volume are circumvented, allowing certain accounts to flood the network with more than 26,000 tweets per day.
Second, coordinated networks of accounts engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms.
arXiv Detail & Related papers (2022-03-25T20:07:08Z) - Comparing the Language of QAnon-related content on Parler, Gab, and
Twitter [68.8204255655161]
Parler, a "free speech" platform popular with conservatives, was taken offline in January 2021 due to the lack of moderation of hateful and QAnon- and other conspiracy-related content.
We compare posts with the hashtag #QAnon on Parler over a month-long period with posts on Twitter and Gab.
Gab has the highest proportion of #QAnon posts with hate terms, and Parler and Twitter are similar in this respect.
On all three platforms, posts mentioning female political figures, Democrats, or Donald Trump contain more anti-social language than posts mentioning male politicians or Republicans.
arXiv Detail & Related papers (2021-11-22T11:19:15Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Understanding the Hoarding Behaviors during the COVID-19 Pandemic using
Large Scale Social Media Data [77.34726150561087]
We analyze the hoarding and anti-hoarding patterns of over 42,000 unique Twitter users in the United States from March 1 to April 30, 2020.
We find the percentage of females in both hoarding and anti-hoarding groups is higher than that of the general Twitter user population.
The LIWC anxiety mean for the hoarding-related tweets is significantly higher than the baseline Twitter anxiety mean.
arXiv Detail & Related papers (2020-10-15T16:02:25Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.