A Comparative Study on COVID-19 Fake News Detection Using Different
Transformer Based Models
- URL: http://arxiv.org/abs/2208.01355v1
- Date: Tue, 2 Aug 2022 10:50:16 GMT
- Title: A Comparative Study on COVID-19 Fake News Detection Using Different
Transformer Based Models
- Authors: Sajib Kumar Saha Joy, Dibyo Fabian Dofadar, Riyo Hayat Khan, Md.
Sabbir Ahmed, Rafeed Rahman
- Abstract summary: The rapid advancement of social networks and the convenience of internet availability have accelerated the rampant spread of false news and rumors on social media sites.
To limit the spread of such inaccuracies, identifying fake news on online platforms is the first and foremost step.
The RoBERTa model performed best, obtaining an F1 score of 0.98 in both the real and fake classes.
- Score: 2.0649235321315285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of social networks and the convenience of internet
availability have accelerated the rampant spread of false news and rumors on
social media sites. Amid the COVID-19 pandemic, this misleading information has
aggravated the situation by putting people's mental and physical health in
danger. To limit the spread of such inaccuracies, identifying fake news on
online platforms is the first and foremost step. In this research, the authors
have conducted a comparative analysis by implementing five transformer-based
models, namely BERT, BERT without LSTM, ALBERT, RoBERTa, and a hybrid of BERT &
ALBERT, to detect fraudulent COVID-19 news from the internet. The COVID-19 Fake
News Dataset has been used for training and testing the models. Among all these
models, RoBERTa performed best, obtaining an F1 score of 0.98 in both the real
and fake classes.
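The headline result is a per-class F1 of 0.98. As a minimal sketch of how such a per-class score is computed for a binary real/fake classifier (the labels, predictions, and helper name below are illustrative, not taken from the paper):

```python
# Minimal sketch: per-class F1 for a binary fake-news classifier.
# The toy labels and predictions are illustrative, not from the paper.

def f1_for_class(y_true, y_pred, cls):
    """F1 score treating `cls` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["fake", "real", "fake", "real", "fake", "real"]
y_pred = ["fake", "real", "fake", "real", "real", "real"]

f1_fake = f1_for_class(y_true, y_pred, "fake")  # 0.8 on this toy data
f1_real = f1_for_class(y_true, y_pred, "real")
```

Reporting F1 separately for both classes, as the paper does, guards against a classifier that scores well simply by favoring the majority class.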
Related papers
- How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models [95.44559524735308]
Verification based on large language or multimodal models has been proposed to scale up online policing mechanisms for mitigating the spread of false and harmful content.
We test the limits of improving foundation model performance without continual updating through an initial study of knowledge transfer.
Our results on two recent multi-modal fact-checking benchmarks, Mocheg and Fakeddit, indicate that knowledge transfer strategies can improve Fakeddit performance over the state-of-the-art by up to 1.7% and Mocheg performance by up to 2.9%.
arXiv Detail & Related papers (2024-06-29T08:39:07Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Performance Analysis of Transformer Based Models (BERT, ALBERT and RoBERTa) in Fake News Detection [0.0]
The three areas whose residents are most exposed to hoaxes and misinformation are Banten, DKI Jakarta, and West Java.
A previous study indicates superior performance of the transformer model BERT over non-transformer approaches.
In this research, we explore those transformer models and find that ALBERT outperformed the others with 87.6% accuracy, 86.9% precision, 86.9% F1-score, and a run-time of 174.5 s/epoch.
arXiv Detail & Related papers (2023-08-09T13:33:27Z)
- Transformer-based approaches to Sentiment Detection [55.41644538483948]
We examined the performance of four different types of state-of-the-art transformer models for text classification.
The RoBERTa transformer model performs best on the test dataset with a score of 82.6% and is highly recommended for quality predictions.
arXiv Detail & Related papers (2023-03-13T17:12:03Z)
- Detecting COVID-19 Conspiracy Theories with Transformers and TF-IDF [2.3202611780303553]
We present our methods and results for three fake news detection tasks at MediaEval benchmark 2021.
We find that a pre-trained transformer yields the best validation results, but that a transformer trained from random initialization with a smart design can also reach accuracies close to those of the pre-trained transformer.
arXiv Detail & Related papers (2022-05-01T01:48:48Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62-7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- COVID-19 Fake News Detection Using Bidirectional Encoder Representations from Transformers Based Models [16.400631119118636]
COVID-19 fake news detection has become a novel and important task in the NLP field.
In this paper, we fine-tune the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model as our base model.
We add BiLSTM layers and CNN layers on top of the fine-tuned BERT model, with frozen or unfrozen parameters, respectively.
arXiv Detail & Related papers (2021-09-30T02:50:05Z)
- Transformer based Automatic COVID-19 Fake News Detection System [9.23545668304066]
Misinformation is especially prevalent in the ongoing coronavirus disease (COVID-19) pandemic.
We report a methodology to analyze the reliability of information shared on social media pertaining to the COVID-19 pandemic.
Our system obtained an F1 score of 0.9855 on the test set and ranked 5th among 160 teams.
arXiv Detail & Related papers (2021-01-01T06:49:27Z)
- Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking [0.3441021278275805]
We develop a two-stage automated pipeline for COVID-19 fake news detection using state-of-the-art machine learning models for natural language processing.
The first model leverages a novel fact-checking algorithm that retrieves the facts most relevant to user claims about COVID-19.
The second model verifies the level of truth in the claim by computing the textual entailment between the claim and the true facts retrieved from a manually curated COVID-19 dataset.
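The two-stage idea (retrieve relevant facts, then check entailment) can be sketched as follows. The toy facts and the bag-of-words cosine retriever below are illustrative stand-ins, not the paper's actual algorithm or dataset:

```python
# Sketch of a two-stage pipeline: (1) retrieve the stored fact most relevant
# to a claim, (2) pass the (claim, fact) pair to an entailment check.
# The facts and the overlap-based retriever are illustrative stand-ins.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve_fact(claim, facts):
    """Stage 1: return the fact most similar to the claim."""
    cv = Counter(claim.lower().split())
    return max(facts, key=lambda f: cosine(cv, Counter(f.lower().split())))

facts = [
    "masks reduce transmission of covid 19",
    "vaccines do not alter human dna",
]
best = retrieve_fact("do covid 19 vaccines change your dna", facts)
# Stage 2 would compute textual entailment between the claim and `best`,
# e.g. with a fine-tuned NLI model (omitted in this sketch).
```

In practice, stage 1 would use a stronger retriever (e.g. TF-IDF or dense embeddings) over the curated COVID-19 fact collection, and stage 2 a trained entailment model.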
arXiv Detail & Related papers (2020-11-26T11:50:45Z)
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference [69.93692147242284]
Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications.
We propose a simple but effective method, DeeBERT, to accelerate BERT inference.
Experiments show that DeeBERT is able to save up to 40% inference time with minimal degradation in model quality.
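The early-exit mechanism can be sketched as follows: each layer gets a classifier head, and inference stops at the first layer whose prediction entropy falls below a confidence threshold. The per-layer probabilities and the threshold value below are illustrative, not DeeBERT's actual outputs:

```python
# Sketch of entropy-based early exiting: stop at the first layer whose
# classifier output is confident enough. Probabilities are illustrative.
import math

def entropy(probs):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit(layer_probs, threshold=0.3):
    """Return (layer_index, probs) at the first confident layer."""
    for i, probs in enumerate(layer_probs):
        if entropy(probs) < threshold:
            return i, probs
    return len(layer_probs) - 1, layer_probs[-1]  # fall back to final layer

layer_probs = [
    [0.55, 0.45],  # early layer: uncertain, keep going
    [0.70, 0.30],  # entropy still above threshold
    [0.97, 0.03],  # confident: exit here, skipping later layers
    [0.99, 0.01],  # never reached
]
layer, probs = early_exit(layer_probs)
```

Skipping the remaining layers for easy inputs is what yields the reported inference-time savings, since only hard inputs traverse the full network.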
arXiv Detail & Related papers (2020-04-27T17:58:05Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which may cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.