Categorising Fine-to-Coarse Grained Misinformation: An Empirical Study
of COVID-19 Infodemic
- URL: http://arxiv.org/abs/2106.11702v2
- Date: Wed, 23 Jun 2021 14:24:37 GMT
- Title: Categorising Fine-to-Coarse Grained Misinformation: An Empirical Study
of COVID-19 Infodemic
- Authors: Ye Jiang, Xingyi Song, Carolina Scarton, Ahmet Aker, Kalina Bontcheva
- Abstract summary: We introduce a fine-grained annotated misinformation tweet dataset that includes social behaviour annotations.
The dataset not only allows social behaviour analysis but is also suitable for both evidence-based and non-evidence-based misinformation classification tasks.
- Score: 6.137022734902771
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The spread of COVID-19 misinformation over social media has already drawn the
attention of many researchers. According to Google Scholar, about 26,000
COVID-19-related misinformation studies have been published to date. Most of
these studies focus on 1) detecting and/or 2) analysing the characteristics of
COVID-19-related misinformation. However, the social behaviours related to
misinformation are often neglected. In this paper, we introduce a
fine-grained annotated misinformation tweet dataset that includes social
behaviour annotations (e.g. commenting on or questioning the misinformation). The
dataset not only allows social behaviour analysis but is also suitable for both
evidence-based and non-evidence-based misinformation classification tasks. In
addition, we introduce leave-claim-out validation in our experiments and
demonstrate that misinformation classification performance can differ
significantly when the classifier is applied to real-world, unseen misinformation.
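The leave-claim-out validation mentioned above behaves like grouped cross-validation: all tweets about the same claim are held out together, so the classifier is only ever evaluated on claims it has never seen during training. Below is a minimal sketch using scikit-learn's LeaveOneGroupOut; the TF-IDF plus logistic-regression pipeline and the `texts`/`labels`/`claim_ids` fields are illustrative assumptions, not the paper's actual model or dataset schema.

```python
# Minimal sketch of leave-claim-out validation: every tweet about the same
# claim shares a group id, so each fold tests on claims unseen during training.
# The TF-IDF + logistic-regression pipeline and field names are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline

def leave_claim_out_scores(texts, labels, claim_ids):
    """Return one macro-F1 score per held-out claim."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(texts, labels, groups=claim_ids):
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit([texts[i] for i in train_idx], [labels[i] for i in train_idx])
        predictions = model.predict([texts[i] for i in test_idx])
        scores.append(f1_score([labels[i] for i in test_idx], predictions, average="macro"))
    return scores
```

Comparing these per-claim scores against a standard random-split evaluation is one way to expose the performance gap on unseen misinformation that the abstract reports.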
Related papers
- Smoke and Mirrors in Causal Downstream Tasks [59.90654397037007]
This paper looks at the causal inference task of treatment effect estimation, where the outcome of interest is recorded in high-dimensional observations.
We compare 6,480 models fine-tuned from state-of-the-art visual backbones and find that the sampling and modeling choices significantly affect the accuracy of the causal estimate.
Our results suggest that future benchmarks should carefully consider real downstream scientific questions, especially causal ones.
arXiv Detail & Related papers (2024-05-27T13:26:34Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - AMIR: Automated MisInformation Rebuttal -- A COVID-19 Vaccination Datasets based Recommendation System [0.05461938536945722]
This work explored how existing information obtained from social media can be harnessed to facilitate automated rebuttal of misinformation at scale.
It leverages two publicly available datasets on COVID-19 vaccination: FaCov (fact-checked articles) and a dataset of misleading social media (Twitter) posts.
arXiv Detail & Related papers (2023-10-29T13:07:33Z) - A Large-Scale Comparative Study of Accurate COVID-19 Information versus
Misinformation [4.926199465135915]
The COVID-19 pandemic led to an infodemic where an overwhelming amount of COVID-19 related content was being disseminated at high velocity through social media.
This motivated us to carry out a comparative study of the characteristics of COVID-19 misinformation versus those of accurate COVID-19 information through a large-scale computational analysis of over 242 million tweets.
An added contribution of this study is the creation of a COVID-19 misinformation classification dataset.
arXiv Detail & Related papers (2023-04-10T18:44:41Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - "COVID-19 was a FIFA conspiracy #curropt": An Investigation into the
Viral Spread of COVID-19 Misinformation [60.268682953952506]
We estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models.
We provide a strategy to combat social media posts that are likely to cause widespread harm.
arXiv Detail & Related papers (2022-06-12T19:41:01Z) - Testing the Generalization of Neural Language Models for COVID-19
Misinformation Detection [6.1204874238049705]
A drastic rise in potentially life-threatening misinformation has been a by-product of the COVID-19 pandemic.
We evaluate fifteen Transformer-based models on five COVID-19 misinformation datasets.
We show tokenizers and models tailored to COVID-19 data do not provide a significant advantage over general-purpose ones.
arXiv Detail & Related papers (2021-11-15T15:01:55Z) - Visual Selective Attention System to Intervene User Attention in Sharing
COVID-19 Misinformation [2.7393821783237184]
This study aims to intervene in the user's attention with a visual selective attention approach.
The results are expected to form the basis for social media applications that combat the negative impact of COVID-19 misinformation during the infodemic.
arXiv Detail & Related papers (2021-10-26T08:41:03Z) - Case Study on Detecting COVID-19 Health-Related Misinformation in Social
Media [7.194177427819438]
This paper presents a mechanism to detect COVID-19 health-related misinformation in social media.
We defined misinformation themes and associated keywords, which were incorporated into the misinformation detection mechanism using applied machine learning techniques.
Our method shows promising results, with up to 78% accuracy in classifying health-related misinformation versus true information.
arXiv Detail & Related papers (2021-06-12T16:26:04Z) - Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and evaluate the correctness of given claims based on their perplexity scores at debunking time (a hedged sketch of this scoring step appears after this list).
arXiv Detail & Related papers (2020-06-08T15:13:44Z) - A Study of Knowledge Sharing related to Covid-19 Pandemic in Stack
Overflow [69.5231754305538]
A study of 464 Stack Overflow questions, posted mainly in February and March 2020, leveraging text mining.
Findings reveal that this global crisis sparked intense and increasing activity on Stack Overflow, with most post topics reflecting a strong interest in the analysis of Covid-19 data.
arXiv Detail & Related papers (2020-04-18T08:19:46Z)