Community-Based Fact-Checking on Twitter's Birdwatch Platform
- URL: http://arxiv.org/abs/2104.07175v3
- Date: Tue, 14 Dec 2021 04:37:59 GMT
- Title: Community-Based Fact-Checking on Twitter's Birdwatch Platform
- Authors: Nicolas Pröllochs
- Abstract summary: Twitter introduced "Birdwatch," a community-driven approach to address misinformation on Twitter.
On Birdwatch, users can identify tweets they believe are misleading, write notes that provide context to the tweet and rate the quality of other users' notes.
We collect Birdwatch notes and ratings between the introduction of the feature in early 2021 and the end of July 2021.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation undermines the credibility of social media and poses
significant threats to modern societies. As a countermeasure, Twitter has
recently introduced "Birdwatch," a community-driven approach to address
misinformation on Twitter. On Birdwatch, users can identify tweets they believe
are misleading, write notes that provide context to the tweet and rate the
quality of other users' notes. In this work, we empirically analyze how users
interact with this new feature. For this purpose, we collect all Birdwatch
notes and ratings between the introduction of the feature in early 2021 and the end
of July 2021. We then map each Birdwatch note to the fact-checked tweet using
Twitter's historical API. In addition, we use text mining methods to extract
content characteristics from the text explanations in the Birdwatch notes
(e.g., sentiment). Our empirical analysis yields the following main findings:
(i) users more frequently file Birdwatch notes for misleading than not
misleading tweets. These misleading tweets are primarily reported because of
factual errors, lack of important context, or because they treat unverified
claims as facts. (ii) Birdwatch notes are more helpful to other users if they
link to trustworthy sources and if they embed a more positive sentiment. (iii)
The social influence of the author of the source tweet is associated with
differences in the level of user consensus. For influential users with many
followers, Birdwatch notes yield a lower level of consensus among users and
community-created fact checks are more likely to be seen as being incorrect and
argumentative. Altogether, our findings can help social media platforms to
formulate guidelines for users on how to write more helpful fact checks. At the
same time, our analysis suggests that community-based fact-checking faces
challenges regarding opinion speculation and polarization among the user base.
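A minimal, illustrative sketch (not the authors' released code) of the kind of pipeline the abstract describes: load the public Birdwatch note and rating exports, aggregate helpfulness ratings per note, and score the sentiment of each note's text explanation. The file names, column names, and the VADER sentiment scorer are assumptions for illustration; the abstract does not specify the exact text-mining tools, and the mapping of notes to fact-checked tweets via Twitter's historical API is omitted here.

```python
# Illustrative sketch only; not the authors' code. Column names below
# (noteId, tweetId, classification, summary, helpful) are assumptions based on
# the public Birdwatch data exports and may differ from the paper's snapshot.
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk; nltk.download("vader_lexicon")  # one-time lexicon download

notes = pd.read_csv("notes.tsv", sep="\t")      # one row per Birdwatch note
ratings = pd.read_csv("ratings.tsv", sep="\t")  # one row per rating of a note

# Aggregate ratings per note, e.g. the share of raters who marked a note "helpful".
helpfulness = (
    ratings.groupby("noteId")["helpful"]
    .agg(total_ratings="count", helpful_ratio="mean")
    .reset_index()
)

# Dictionary-based sentiment of the free-text explanation ("summary" field).
# The paper only says "text mining methods (e.g., sentiment)"; VADER is illustrative.
sia = SentimentIntensityAnalyzer()
notes["sentiment"] = notes["summary"].fillna("").map(
    lambda text: sia.polarity_scores(text)["compound"]
)

merged = notes.merge(helpfulness, on="noteId", how="left")
# Mapping each note's tweetId to the source tweet (author, follower count, ...)
# would additionally require Twitter's historical API and is omitted here.
print(merged.groupby("classification")[["sentiment", "helpful_ratio"]].mean())
```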
Related papers
- Can Community Notes Replace Professional Fact-Checkers? [49.5332225129956]
Policy changes by Twitter/X and Meta signal a shift away from partnerships with fact-checking organisations.
Our analysis reveals that community notes cite fact-checking sources up to five times more than previously reported.
arXiv Detail & Related papers (2025-02-19T22:26:39Z) - Who Checks the Checkers? Exploring Source Credibility in Twitter's Community Notes [0.03511246202322249]
The proliferation of misinformation on social media platforms has become a significant concern.
This study focuses on Twitter's Community Notes feature and its potential role in crowd-sourced fact-checking.
We find that the majority of cited sources are news outlets that are left-leaning and are of high factuality, pointing to a potential bias in the platform's community fact-checking.
arXiv Detail & Related papers (2024-06-18T09:47:58Z) - ViralBERT: A User Focused BERT-Based Approach to Virality Prediction [11.992815669875924]
We propose ViralBERT, which can be used to predict the virality of tweets using content- and user-based features.
We employ a method of concatenating numerical features, such as hashtag counts and follower numbers, to the tweet text, and utilise two BERT modules.
We collect a dataset of 330k tweets to train ViralBERT and validate the efficacy of our model using baselines from current studies in this field.
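As a purely illustrative aside (not the ViralBERT implementation), the sketch below shows one simple way to combine tweet text with numerical features such as hashtag and follower counts in a BERT-based regressor; the paper itself describes two BERT modules, and all names and values here are hypothetical.

```python
# Hypothetical sketch of combining tweet text with numeric features in a BERT model.
# Not the ViralBERT architecture: the paper uses two BERT modules; this uses one.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TextPlusNumericRegressor(nn.Module):
    def __init__(self, n_numeric: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Head over the [CLS] embedding concatenated with the numeric features.
        self.head = nn.Sequential(
            nn.Linear(hidden + n_numeric, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # e.g. a virality score such as log retweet count
        )

    def forward(self, input_ids, attention_mask, numeric_features):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.head(torch.cat([cls, numeric_features], dim=-1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(["Breaking: example tweet text"], return_tensors="pt",
                padding=True, truncation=True)
numeric = torch.tensor([[2.0, 10_000.0]])  # assumed features: hashtag count, follower count
model = TextPlusNumericRegressor(n_numeric=2)
virality = model(enc["input_ids"], enc["attention_mask"], numeric)
```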
arXiv Detail & Related papers (2022-05-17T21:40:24Z) - Analyzing Behavioral Changes of Twitter Users After Exposure to
Misinformation [1.8251012479962594]
We aim to understand whether general Twitter users changed their behavior after being exposed to misinformation.
We compare the before and after behavior of exposed users to determine whether the frequency of the tweets they posted underwent any significant change.
We also study the characteristics of two specific user groups, multi-exposure and extreme change groups, which were potentially highly impacted.
arXiv Detail & Related papers (2021-11-01T04:48:07Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency of users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Understanding Information Spreading Mechanisms During COVID-19 Pandemic
by Analyzing the Impact of Tweet Text and User Features for Retweet
Prediction [6.658785818853953]
COVID-19 has affected the world economy and the daily life routine of almost everyone.
Social media platforms enable users to share information with other users who can reshare this information.
We propose two CNN and RNN based models and evaluate the performance of these models on a publicly available TweetsCOV19 dataset.
arXiv Detail & Related papers (2021-05-26T15:55:58Z) - Identity Signals in Emoji Do not Influence Perception of Factual Truth
on Twitter [90.14874935843544]
Prior work has shown that Twitter users use skin-toned emoji as an act of self-representation to express their racial/ethnic identity.
We test whether this signal of identity can influence readers' perceptions about the content of a post containing that signal.
We find that neither emoji nor profile photo has an effect on how readers rate these facts.
arXiv Detail & Related papers (2021-05-07T10:56:19Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Understanding the Hoarding Behaviors during the COVID-19 Pandemic using
Large Scale Social Media Data [77.34726150561087]
We analyze the hoarding and anti-hoarding patterns of over 42,000 unique Twitter users in the United States from March 1 to April 30, 2020.
We find the percentage of females in both hoarding and anti-hoarding groups is higher than that of the general Twitter user population.
The LIWC anxiety mean for the hoarding-related tweets is significantly higher than the baseline Twitter anxiety mean.
arXiv Detail & Related papers (2020-10-15T16:02:25Z) - Information Consumption and Social Response in a Segregated Environment:
the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward the understanding of coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.