The Role of the Crowd in Countering Misinformation: A Case Study of the
COVID-19 Infodemic
- URL: http://arxiv.org/abs/2011.05773v2
- Date: Thu, 12 Nov 2020 04:20:37 GMT
- Title: The Role of the Crowd in Countering Misinformation: A Case Study of the
COVID-19 Infodemic
- Authors: Nicholas Micallef, Bing He, Srijan Kumar, Mustaque Ahamad and Nasir
Memon
- Abstract summary: We focus on tweets related to the COVID-19 pandemic, analyzing the spread of misinformation, professional fact checks, and the crowd response to popular misleading claims about COVID-19.
We train a classifier to create a novel dataset of 155,468 COVID-19-related tweets, containing 33,237 false claims and 33,413 refuting arguments.
We observe that the surge in misinformation tweets results in a quick response and a corresponding increase in tweets that refute such misinformation.
- Score: 15.885290526721544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact checking by professionals is viewed as a vital defense in the fight
against misinformation. While fact checking is important and its impact has been
significant, fact checks can have limited visibility and may not reach the
intended audience, such as those deeply embedded in polarized communities.
Concerned citizens (i.e., the crowd), who are users of the platforms where
misinformation appears, can play a crucial role in disseminating fact-checking
information and in countering the spread of misinformation. To explore if this
is the case, we conduct a data-driven study of misinformation on the Twitter
platform, focusing on tweets related to the COVID-19 pandemic, analyzing the
spread of misinformation, professional fact checks, and the crowd response to
popular misleading claims about COVID-19. In this work, we curate a dataset of
false claims and statements that seek to challenge or refute them. We train a
classifier to create a novel dataset of 155,468 COVID-19-related tweets,
containing 33,237 false claims and 33,413 refuting arguments. Our findings show
that professional fact-checking tweets have limited volume and reach. In
contrast, we observe that a surge in misinformation tweets results in a quick
response and a corresponding increase in tweets that refute such
misinformation. More importantly, we find striking differences in the way the
crowd refutes tweets: some tweets appear to be opinions, while others
contain concrete evidence, such as a link to a reputable source. Our work
provides insights into how misinformation is organically countered in social
platforms by some of their users and the role they play in amplifying
professional fact checks. These insights could lead to the development of tools and
mechanisms that empower concerned citizens to combat misinformation. The
code and data can be found at
http://claws.cc.gatech.edu/covid_counter_misinformation.html.
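The abstract describes training a classifier to separate false claims from refuting arguments. The paper does not specify the model here, so as an illustration only, a toy multinomial Naive Bayes over tweet tokens (all function names and the tiny labeled sample below are hypothetical) could look like:

```python
# Toy sketch: classify tweets as misinformation vs. refutation.
# This is NOT the paper's model; it is a minimal Naive Bayes illustration.
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (token_list, label). Returns priors, per-label counts, vocab."""
    priors = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    for tokens, label in examples:
        word_counts[label].update(tokens)
    vocab = {t for tokens, _ in examples for t in tokens}
    return priors, word_counts, vocab

def predict(tokens, model):
    """Pick the label maximizing log P(label) + sum log P(token|label), add-one smoothed."""
    priors, word_counts, vocab = model
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        score = math.log(priors[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            score += math.log((word_counts[label][t] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical labeled sample, standing in for the curated claim/refutation data.
train = [
    ("5g towers spread the virus".split(), "misinfo"),
    ("drinking bleach cures covid".split(), "misinfo"),
    ("fact check 5g does not spread the virus".split(), "refute"),
    ("this claim is false see the cdc source".split(), "refute"),
]
model = train_nb(train)
print(predict("bleach cures the virus".split(), model))  # -> misinfo
```

In practice the authors' classifier would be trained on the full 155K-tweet dataset; this sketch only shows the shape of the labeling step.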
Related papers
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for
Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Cross-lingual COVID-19 Fake News Detection [54.125563009333995]
We make the first attempt to detect COVID-19 misinformation in a low-resource language (Chinese) using only fact-checked news in a high-resource language (English).
We propose a deep learning framework named CrossFake to jointly encode the cross-lingual news body texts and capture the news content.
Empirical results on our dataset demonstrate the effectiveness of CrossFake under the cross-lingual setting.
arXiv Detail & Related papers (2021-10-13T04:44:02Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- The State of Infodemic on Twitter [0.0]
Social media posts and platforms are at risk of rumors and misinformation in the face of the serious uncertainty surrounding the virus itself.
We have presented an exploratory analysis of the tweets and the users who are involved in spreading misinformation.
We then delved into machine learning models and natural language processing techniques to identify if a tweet contains misinformation.
arXiv Detail & Related papers (2021-05-17T10:58:35Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Predicting Misinformation and Engagement in COVID-19 Twitter Discourse
in the First Months of the Outbreak [1.2059055685264957]
We examine nearly 505K COVID-19-related tweets from the initial months of the pandemic to understand misinformation as a function of bot-behavior and engagement.
We found that real users tweet both facts and misinformation, while bots tweet proportionally more misinformation.
arXiv Detail & Related papers (2020-12-03T18:47:34Z)
- ArCOV19-Rumors: Arabic COVID-19 Twitter Dataset for Misinformation
Detection [6.688963029270579]
ArCOV19-Rumors is an Arabic COVID-19 Twitter dataset for misinformation detection composed of tweets containing claims from 27th January till the end of April 2020.
We collected 138 verified claims, mostly from popular fact-checking websites, and identified 9.4K relevant tweets to those claims.
Tweets were manually annotated for veracity to support research on misinformation detection, one of the major problems faced during a pandemic.
arXiv Detail & Related papers (2020-10-17T11:21:40Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
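The two-step pipeline above (extract evidence, then score claims by perplexity) can be illustrated with a toy model. The paper primes a neural language model; purely as a stand-in, the sketch below uses an add-one-smoothed bigram model built from the evidence text, so that claims inconsistent with the evidence receive higher perplexity. All names here are illustrative, not from the paper.

```python
# Toy sketch of perplexity-based claim scoring: a bigram model over the
# extracted evidence stands in for the primed neural language model.
import math
from collections import Counter

def build_bigram_model(text):
    """Count unigrams and bigrams over whitespace tokens of the evidence text."""
    tokens = text.lower().split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams, len(set(tokens))

def perplexity(claim, model):
    """Per-bigram perplexity of a claim under an add-one-smoothed bigram model."""
    unigrams, bigrams, vocab_size = model
    tokens = claim.lower().split()
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    n = max(len(tokens) - 1, 1)
    return math.exp(-log_prob / n)

# Hypothetical evidence sentences, standing in for retrieved reliable sources.
evidence = ("masks reduce the spread of the virus . "
            "vaccines are safe and effective against the virus .")
model = build_bigram_model(evidence)

# A claim consistent with the evidence scores lower perplexity than an
# unsupported one, which is the signal used at debunking time.
low = perplexity("masks reduce the spread", model)
high = perplexity("microchips cause the virus", model)
print(low < high)  # -> True
```

A real implementation would compute perplexity with a pretrained transformer conditioned on the evidence; the bigram model only makes the scoring signal concrete.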
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
- An Exploratory Study of COVID-19 Misinformation on Twitter [5.070542698701158]
During the COVID-19 pandemic, social media has become a home ground for misinformation.
We have conducted an exploratory study into the propagation, authors and content of misinformation on Twitter around the topic of COVID-19.
arXiv Detail & Related papers (2020-05-12T12:07:35Z)
- COVID-19 on Social Media: Analyzing Misinformation in Twitter
Conversations [22.43295864610142]
We collected streaming data related to COVID-19 using the Twitter API, starting March 1, 2020.
We identified unreliable and misleading contents based on fact-checking sources.
We examined the narratives promoted in misinformation tweets, along with the distribution of engagements with these tweets.
arXiv Detail & Related papers (2020-03-26T09:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.