Eating Garlic Prevents COVID-19 Infection: Detecting Misinformation on
the Arabic Content of Twitter
- URL: http://arxiv.org/abs/2101.05626v1
- Date: Sat, 9 Jan 2021 22:52:21 GMT
- Title: Eating Garlic Prevents COVID-19 Infection: Detecting Misinformation on
the Arabic Content of Twitter
- Authors: Sarah Alqurashi, Btool Hamoui, Abdulaziz Alashaikh, Ahmad Alhindi,
Eisa Alanazi
- Abstract summary: We construct a large Arabic dataset related to COVID-19 misinformation and gold-annotate the tweets into two categories: misinformation or not.
We apply eight different traditional and deep machine learning models, with different features including word embeddings and word frequency.
Experiments show that optimizing the area under the curve (AUC) improves the models' performance, and Extreme Gradient Boosting (XGBoost) achieves the highest accuracy in detecting COVID-19 misinformation online.
- Score: 0.23624125155742054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of social media content during the current pandemic
provides useful tools for disseminating information, but it has also become a
source of misinformation. Therefore, there is an urgent need for fact-checking and
effective techniques for detecting misinformation in social media. In this
work, we study the misinformation in the Arabic content of Twitter. We
construct a large Arabic dataset related to COVID-19 misinformation and
gold-annotate the tweets into two categories: misinformation or not. Then, we
apply eight different traditional and deep machine learning models, with
different features including word embeddings and word frequency. The word
embedding models (FastText and word2vec) exploit more than two million
Arabic tweets related to COVID-19. Experiments show that optimizing the area
under the curve (AUC) improves the models' performance, and Extreme Gradient
Boosting (XGBoost) achieves the highest accuracy in detecting COVID-19
misinformation online.
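As a rough illustration of this pipeline (a sketch under stated assumptions, not the authors' released code), the snippet below trains an XGBoost classifier on TF-IDF word-frequency features of labeled tweets and evaluates it with ROC AUC; the file name, column names, and hyperparameters are illustrative, and the FastText/word2vec embedding features used in the paper are omitted for brevity.

```python
# Illustrative sketch: binary misinformation classification with XGBoost on
# TF-IDF word-frequency features, evaluated with ROC AUC.
# "tweets_labeled.csv" and its "text"/"label" columns are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("tweets_labeled.csv")  # label: 1 = misinformation, 0 = not
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42
)

# Word-frequency features; the paper additionally uses FastText/word2vec embeddings.
vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = XGBClassifier(
    n_estimators=300,
    max_depth=6,
    learning_rate=0.1,
    eval_metric="auc",  # monitor area under the ROC curve during boosting
)
clf.fit(X_train_vec, y_train)

probs = clf.predict_proba(X_test_vec)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```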
Related papers
- Harnessing the Power of Text-image Contrastive Models for Automatic
Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance on detecting non-matched image-text pairs when the training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z) - Machine Learning-based Automatic Annotation and Detection of COVID-19
Fake News [8.020736472947581]
COVID-19 impacted every part of the world, and misinformation about the outbreak traveled faster than the virus.
Existing work neglects the presence of bots that act as a catalyst in the spread.
We propose an automated approach for labeling data using verified fact-checked statements on a Twitter dataset.
arXiv Detail & Related papers (2022-09-07T13:55:59Z) - Two-Stage Classifier for COVID-19 Misinformation Detection Using BERT: a
Study on Indonesian Tweets [0.15229257192293202]
Research on COVID-19 misinformation detection in Indonesia is still scarce.
In this study, we propose a two-stage classifier model using the IndoBERT pre-trained language model for the tweet misinformation detection task.
The experimental results show that the combination of the BERT sequence classifier for relevance prediction and Bi-LSTM for misinformation detection outperformed other machine learning models with an accuracy of 87.02%.
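A schematic sketch of such a two-stage pipeline is given below; it is an interpretation of the description above, not the paper's code. A BERT sequence classifier first filters COVID-19-relevant tweets, and a Bi-LSTM then labels the relevant ones. The IndoBERT checkpoint name refers to a publicly available pretrained model, and both classification heads here are untrained placeholders.

```python
# Schematic two-stage pipeline: a BERT sequence classifier for relevance,
# then a Bi-LSTM for misinformation detection on relevant tweets only.
# The classification heads below are untrained placeholders.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
relevance_model = AutoModelForSequenceClassification.from_pretrained(
    "indobenchmark/indobert-base-p1", num_labels=2  # assumed mapping: 0 = irrelevant, 1 = relevant
)

class BiLSTMClassifier(nn.Module):
    """Minimal Bi-LSTM over token ids for binary misinformation prediction."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, 2)

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        _, (h, _) = self.lstm(emb)               # h: (2, batch, hidden_dim)
        feats = torch.cat([h[0], h[1]], dim=-1)  # concatenate both directions
        return self.fc(feats)

misinfo_model = BiLSTMClassifier(vocab_size=tokenizer.vocab_size)

def classify(tweet: str) -> str:
    enc = tokenizer(tweet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        relevant = relevance_model(**enc).logits.argmax(-1).item() == 1
        if not relevant:
            return "irrelevant"
        label = misinfo_model(enc["input_ids"]).argmax(-1).item()
    return "misinformation" if label == 1 else "not misinformation"
```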
arXiv Detail & Related papers (2022-06-30T15:33:20Z) - Twitter-COMMs: Detecting Climate, COVID, and Military Multimodal
Misinformation [83.2079454464572]
This paper describes our approach to the Image-Text Inconsistency Detection challenge of the DARPA Semantic Forensics (SemaFor) Program.
We collect Twitter-COMMs, a large-scale multimodal dataset with 884k tweets relevant to the topics of Climate Change, COVID-19, and Military Vehicles.
We train our approach, based on the state-of-the-art CLIP model, leveraging automatically generated random and hard negatives.
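A minimal sketch of CLIP-based image-text consistency scoring is shown below, as a simplified stand-in for the fine-tuned detector described above; the checkpoint, image path, caption, and decision threshold are illustrative assumptions.

```python
# Minimal sketch: score image-text consistency with a pretrained CLIP model;
# a low cosine similarity suggests a mismatched (potentially misleading) pair.
# The image path, caption, and 0.25 threshold are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("tweet_image.jpg")
caption = "Flooding in the city center after record rainfall."

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
    img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    similarity = (img_emb @ txt_emb.T).item()

print("consistent pair" if similarity > 0.25 else "possible mismatch")
```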
arXiv Detail & Related papers (2021-12-16T03:37:20Z) - Cross-lingual COVID-19 Fake News Detection [54.125563009333995]
We make the first attempt to detect COVID-19 misinformation in a low-resource language (Chinese) using only fact-checked news in a high-resource language (English).
We propose a deep learning framework named CrossFake to jointly encode the cross-lingual news body texts and capture the news content.
Empirical results on our dataset demonstrate the effectiveness of CrossFake under the cross-lingual setting.
arXiv Detail & Related papers (2021-10-13T04:44:02Z) - VidLanKD: Improving Language Understanding via Video-Distilled Knowledge
Transfer [76.3906723777229]
We present VidLanKD, a video-language knowledge distillation method for improving language understanding.
We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models.
arXiv Detail & Related papers (2021-07-06T15:41:32Z) - AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News and Hate Speech
Detection Dataset [0.0]
"AraCOVID19-MFH" is a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset.
Our dataset contains 10,828 Arabic tweets annotated with 10 different labels.
It can also be used for hate speech detection, opinion/news classification, dialect identification, and many other tasks.
arXiv Detail & Related papers (2021-05-07T09:52:44Z) - ArCorona: Analyzing Arabic Tweets in the Early Days of Coronavirus
(COVID-19) Pandemic [3.057212947792573]
We present the largest manually annotated dataset of Arabic tweets related to COVID-19.
We describe annotation guidelines, analyze our dataset, and build effective machine learning and transformer-based models for classification.
arXiv Detail & Related papers (2020-12-02T19:05:25Z) - ArCOV19-Rumors: Arabic COVID-19 Twitter Dataset for Misinformation
Detection [6.688963029270579]
ArCOV19-Rumors is an Arabic COVID-19 Twitter dataset for misinformation detection composed of tweets containing claims from 27th January till the end of April 2020.
We collected 138 verified claims, mostly from popular fact-checking websites, and identified 9.4K relevant tweets to those claims.
Tweets were manually annotated for veracity to support research on misinformation detection, which is one of the major problems faced during a pandemic.
arXiv Detail & Related papers (2020-10-17T11:21:40Z) - Trawling for Trolling: A Dataset [56.1778095945542]
We present a dataset that models trolling as a subcategory of offensive content.
The dataset has 12,490 samples, split across 5 classes: Normal, Profanity, Trolling, Derogatory, and Hate Speech.
arXiv Detail & Related papers (2020-08-02T17:23:55Z) - Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
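To make the perplexity-based idea above concrete, the sketch below primes a small causal language model (GPT-2, used here only as a stand-in) with retrieved evidence and scores a claim by its conditional perplexity; the evidence string and example claims are hypothetical.

```python
# Illustrative sketch of perplexity-based claim checking: prime GPT-2 with
# retrieved evidence, then compute perplexity over the claim tokens only;
# a higher score suggests the claim fits the evidence poorly.
# The evidence string and example claims are hypothetical.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of `claim` conditioned on `evidence`."""
    evid_ids = tokenizer(evidence, return_tensors="pt").input_ids
    claim_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evid_ids, claim_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : evid_ids.shape[1]] = -100  # ignore evidence tokens in the loss
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return torch.exp(loss).item()

evidence = "There is no scientific evidence that any food prevents COVID-19 infection."
print(claim_perplexity(evidence, "Eating garlic prevents COVID-19 infection."))
print(claim_perplexity(evidence, "Washing hands helps reduce the spread of COVID-19."))
```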
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.