Characterizing COVID-19 Misinformation Communities Using a Novel Twitter Dataset
- URL: http://arxiv.org/abs/2008.00791v4
- Date: Sat, 19 Sep 2020 07:11:39 GMT
- Title: Characterizing COVID-19 Misinformation Communities Using a Novel Twitter Dataset
- Authors: Shahan Ali Memon and Kathleen M. Carley
- Abstract summary: We present a methodology and analyses to characterize the two competing COVID-19 misinformation communities online.
Our analyses show that COVID-19 misinformed communities are denser and more organized than informed communities.
Our sociolinguistic analyses suggest that COVID-19 informed users tend to use more narratives than misinformed users.
- Score: 9.60966128833701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From conspiracy theories to fake cures and fake treatments, COVID-19 has
become a hotbed for the spread of misinformation online. It is more important
than ever to identify methods to debunk and correct false information online.
In this paper, we present a methodology and analyses to characterize the two
competing COVID-19 misinformation communities online: (i) misinformed users or
users who are actively posting misinformation, and (ii) informed users or users
who are actively spreading true information, or calling out misinformation. The
goals of this study are two-fold: (i) collecting a diverse annotated COVID-19
Twitter dataset that can be used by the research community to conduct
meaningful analysis; and (ii) characterizing the two target communities in
terms of their network structure, linguistic patterns, and their membership in
other communities. Our analyses show that COVID-19 misinformed communities are
denser and more organized than informed communities, with the possibility that a
high volume of the misinformation is part of disinformation campaigns. Our
analyses also suggest that a large majority of misinformed users may be
anti-vaxxers. Finally, our sociolinguistic analyses suggest that COVID-19
informed users tend to use more narratives than misinformed users.
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- AMIR: Automated MisInformation Rebuttal -- A COVID-19 Vaccination Datasets based Recommendation System [0.05461938536945722]
This work explored how existing information obtained from social media can be harnessed to facilitate automated rebuttal of misinformation at scale.
It leverages two publicly available datasets, FaCov (fact-checked articles) and misleading (social media Twitter) data on COVID-19 Vaccination.
arXiv Detail & Related papers (2023-10-29T13:07:33Z)
- Understanding the Humans Behind Online Misinformation: An Observational Study Through the Lens of the COVID-19 Pandemic [12.873747057824833]
We conduct a large-scale observational study analyzing over 32 million COVID-19 tweets and 16 million historical timeline tweets.
We focus on understanding the behavior and psychology of users disseminating misinformation during COVID-19 and its relationship to their historical inclination toward sharing misinformation in non-COVID domains before the pandemic.
arXiv Detail & Related papers (2023-10-12T16:42:53Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- "COVID-19 was a FIFA conspiracy #curropt": An Investigation into the Viral Spread of COVID-19 Misinformation [60.268682953952506]
We estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models.
We provide a strategy to combat social media posts that are likely to cause widespread harm.
arXiv Detail & Related papers (2022-06-12T19:41:01Z)
- Cross-lingual COVID-19 Fake News Detection [54.125563009333995]
We make the first attempt to detect COVID-19 misinformation in a low-resource language (Chinese) using only fact-checked news in a high-resource language (English).
We propose a deep learning framework named CrossFake to jointly encode the cross-lingual news body texts and capture the news content.
Empirical results on our dataset demonstrate the effectiveness of CrossFake under the cross-lingual setting.
arXiv Detail & Related papers (2021-10-13T04:44:02Z)
- Categorising Fine-to-Coarse Grained Misinformation: An Empirical Study of COVID-19 Infodemic [6.137022734902771]
We introduce a fine-grained annotated misinformation tweets dataset including social behaviours annotation.
The dataset not only allows analysis of social behaviours but is also suitable for both evidence-based and non-evidence-based misinformation classification tasks.
arXiv Detail & Related papers (2021-06-22T12:17:53Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, showing a slight preference for questionable content, which may reflect a dissing/endorsement dynamic.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Social Media COVID-19 Misinformation Interventions Viewed Positively, But Have Limited Impact [16.484676698355884]
Social media platforms like Facebook and Twitter rolled out design interventions, including banners linking to authoritative resources and more specific "false information" labels.
We found that most participants indicated a positive attitude towards interventions, particularly post-specific labels for misinformation.
Still, the majority of participants discovered or corrected misinformation through other means, most commonly web searches, suggesting room for platforms to do more to stem the spread of COVID-19 misinformation.
arXiv Detail & Related papers (2020-12-21T00:02:04Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
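The evidence-conditioned perplexity idea above can be illustrated with a toy, self-contained sketch. Note that this substitutes a smoothed word-bigram model built from the evidence for the large neural language model used in the paper, and all texts and thresholds are illustrative assumptions, not taken from the source:

```python
import math
from collections import Counter

def bigram_perplexity(evidence: str, claim: str) -> float:
    """Score a claim's perplexity under a word-bigram model built from
    the evidence (add-one smoothing). Lower perplexity means the claim
    fits the evidence better; high perplexity flags likely misinformation."""
    ev = evidence.lower().split()
    cl = claim.lower().split()
    vocab_size = len(set(ev) | set(cl))
    bigrams = Counter(zip(ev, ev[1:]))   # counts from the "priming" evidence
    unigrams = Counter(ev)
    log_prob = 0.0
    for prev, word in zip(cl, cl[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    n = max(len(cl) - 1, 1)
    return math.exp(-log_prob / n)       # perplexity = exp(mean negative log-prob)

evidence = "masks reduce the spread of the virus according to studies"
supported = "masks reduce the spread of the virus"
misinfo = "vaccines cause the virus to mutate instantly"
# A claim consistent with the evidence scores lower perplexity than one that is not.
assert bigram_perplexity(evidence, supported) < bigram_perplexity(evidence, misinfo)
```

In practice the paper's pipeline would use a neural language model primed with retrieved evidence, with a perplexity threshold chosen on held-out labeled claims.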
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
- CoAID: COVID-19 Healthcare Misinformation Dataset [12.768221316730674]
CoAID includes 4,251 news, 296,000 related user engagements, 926 social platform posts about COVID-19, and ground truth labels.
arXiv Detail & Related papers (2020-05-22T19:08:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.