COVID-19 Fake News Detection Using Bidirectional Encoder Representations from Transformers Based Models
- URL: http://arxiv.org/abs/2109.14816v2
- Date: Fri, 1 Oct 2021 15:45:54 GMT
- Title: COVID-19 Fake News Detection Using Bidirectional Encoder Representations from Transformers Based Models
- Authors: Yuxiang Wang, Yongheng Zhang, Xuebo Li, Xinyao Yu
- Abstract summary: COVID-19 fake news detection has become a novel and important task in the NLP field.
In this paper, we fine-tune the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model as our base model.
We add BiLSTM layers and CNN layers on top of the fine-tuned BERT model, with either frozen or unfrozen BERT parameters.
- Score: 16.400631119118636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, the development of social media allows people to access the latest
news easily. During the COVID-19 pandemic, it is important for people to access
the news so that they can take corresponding protective measures. However, fake
news is flooding social media and is a serious issue, especially during a global
pandemic. Misleading fake news can cause significant harm to both individuals
and society. COVID-19 fake news detection has therefore become a novel and
important task in the NLP field. However, fake news often mixes correct and
incorrect information, which increases the difficulty of the classification
task. In this paper, we fine-tune the pre-trained Bidirectional Encoder
Representations from Transformers (BERT) model as our base model. We add BiLSTM
layers and CNN layers on top of the fine-tuned BERT model, with either frozen or
unfrozen BERT parameters. The evaluation results show that our best model (the
fine-tuned BERT with frozen parameters plus BiLSTM layers) achieves
state-of-the-art results on the COVID-19 fake news detection task. We also
explore keyword evaluation methods using our best model and evaluate the model
performance after removing keywords.
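The abstract does not include code; a minimal PyTorch sketch of the best-performing variant (a frozen BERT encoder with a BiLSTM head) might look like the following, where the bert-base-uncased checkpoint, the 256-unit LSTM, the last-step pooling, and the two-class output are all illustrative assumptions, not the paper's confirmed configuration:
```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class FrozenBertBiLSTM(nn.Module):
    """Frozen BERT encoder followed by a BiLSTM classification head.
    Hyperparameters are illustrative guesses, not the paper's exact setup."""
    def __init__(self, lstm_hidden=256, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():  # freeze the encoder
            p.requires_grad = False
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level hidden states from the frozen BERT encoder
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)
        # Simple last-step pooling over the BiLSTM outputs
        return self.classifier(lstm_out[:, -1, :])

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Masks prevent the spread of COVID-19."],
                  return_tensors="pt", padding=True, truncation=True)
logits = FrozenBertBiLSTM()(batch["input_ids"], batch["attention_mask"])
```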
Related papers
- A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention [0.0]
We propose a semi-supervised self-learning method in which sentiment features are obtained from state-of-the-art pretrained models.
Our learning model is trained in a semi-supervised fashion and incorporates an LSTM with self-attention layers.
We benchmark our model on a dataset of 20,000 news items along with their feedback, and it shows better precision, recall, and F-measure than competitive fake news detection methods.
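As a hedged illustration of the described architecture (an LSTM with self-attention, plus a sentiment feature from a pretrained model), here is a sketch in which the vocabulary size, hidden sizes, and the single-scalar sentiment input are all assumptions:
```python
import torch
import torch.nn as nn

class LSTMSelfAttention(nn.Module):
    """LSTM encoder with additive self-attention pooling, plus an
    externally supplied sentiment score. Dimensions are assumptions."""
    def __init__(self, vocab_size=30000, embed_dim=128, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # scores each time step
        self.classifier = nn.Linear(2 * hidden + 1, num_classes)

    def forward(self, token_ids, sentiment):
        h, _ = self.lstm(self.embed(token_ids))        # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time
        pooled = (weights * h).sum(dim=1)              # (B, 2H)
        # Concatenate the pretrained-model sentiment score as a feature
        return self.classifier(torch.cat([pooled, sentiment], dim=-1))

model = LSTMSelfAttention()
logits = model(torch.randint(0, 30000, (4, 50)), torch.rand(4, 1))
```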
arXiv Detail & Related papers (2024-07-27T20:00:10Z)
- MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection [54.545054873239295]
Deepfakes have recently raised significant trust issues and security concerns among the public.
ViT-based methods take advantage of the expressivity of transformers, achieving superior detection performance.
This work introduces Mixture-of-Experts modules for Face Forgery Detection (MoE-FFD), a generalized yet parameter-efficient ViT-based approach.
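MoE-FFD's exact module design is in the paper; the following is only a generic Mixture-of-Experts feed-forward block of the kind that can replace a ViT MLP, with the expert count and dimensions assumed:
```python
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    """Generic token-level Mixture-of-Experts MLP for a ViT block.
    Expert count and sizes are illustrative, not MoE-FFD's design."""
    def __init__(self, dim=768, hidden=3072, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # soft router over experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                    # x: (B, T, dim)
        weights = torch.softmax(self.gate(x), dim=-1)        # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, T, dim, E)
        # Weighted sum of expert outputs per token
        return (expert_out * weights.unsqueeze(2)).sum(dim=-1)

tokens = torch.randn(2, 197, 768)  # ViT-style patch tokens
out = MoEFeedForward()(tokens)
```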
arXiv Detail & Related papers (2024-04-12T13:02:08Z)
- Performance Analysis of Transformer Based Models (BERT, ALBERT and RoBERTa) in Fake News Detection [0.0]
The top three areas whose residents are most exposed to hoaxes and misinformation are Banten, DKI Jakarta, and West Java.
A previous study indicates superior performance of the transformer model BERT over non-transformer approaches.
In this research, we explore these transformer models and find that ALBERT outperforms the other models with 87.6% accuracy, 86.9% precision, 86.9% F1-score, and a run-time of 174.5 s/epoch.
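As a hedged sketch of how such a BERT/ALBERT/RoBERTa comparison is commonly set up with Hugging Face checkpoints (the checkpoint names are standard public ones, not necessarily the paper's, and a real comparison would fine-tune each model on the hoax dataset before evaluating):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoints = ["bert-base-uncased", "albert-base-v2", "roberta-base"]

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    # num_labels=2 for hoax vs. valid news; the classification heads start
    # untrained, so fine-tuning would precede any meaningful evaluation
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    batch = tokenizer(["Example headline to classify"], return_tensors="pt",
                      padding=True, truncation=True)
    with torch.no_grad():
        probs = model(**batch).logits.softmax(-1)
    print(name, probs)
```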
arXiv Detail & Related papers (2023-08-09T13:33:27Z)
- A Simple yet Effective Self-Debiasing Framework for Transformer Models [49.09053367249642]
Current Transformer-based natural language understanding (NLU) models heavily rely on dataset biases.
We propose a simple yet effective self-debiasing framework for Transformer-based NLU models.
arXiv Detail & Related papers (2023-06-02T20:31:58Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
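As a toy illustration of a meaning-preserving change (not one of BODEGA's actual attacks), here is a single homoglyph substitution followed by re-classification; the stand-in classifier checkpoint is an assumption chosen only to make the sketch runnable:
```python
from transformers import pipeline

# Stand-in classifier; any fine-tuned misinformation detector could go here
clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")

original = "The vaccine is safe and effective."
# Meaning-preserving edit: replace one Latin 'a' with a Cyrillic 'а' homoglyph
perturbed = original.replace("a", "\u0430", 1)

# A robust model should give near-identical predictions for both inputs
print(clf(original))
print(clf(perturbed))
```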
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Transformer-based approaches to Sentiment Detection [55.41644538483948]
We examined the performance of four different types of state-of-the-art transformer models for text classification.
The RoBERTa transformer model performs best on the test dataset, with a score of 82.6%, and is recommended for high-quality predictions.
arXiv Detail & Related papers (2023-03-13T17:12:03Z)
- A Comparative Study on COVID-19 Fake News Detection Using Different Transformer Based Models [2.0649235321315285]
The rapid advancement of social networks and the convenience of internet availability have accelerated the rampant spread of false news and rumors on social media sites.
To limit the spread of such inaccuracies, identifying the fake news from online platforms could be the first and foremost step.
The RoBERTa model performed better than the other models, obtaining an F1 score of 0.98 on both the real and fake classes.
arXiv Detail & Related papers (2022-08-02T10:50:16Z)
- Transformer-based Language Model Fine-tuning Methods for COVID-19 Fake News Detection [7.29381091750894]
We propose a novel transformer-based language model fine-tuning approach for COVID-19 fake news detection.
First, the token vocabulary of each individual model is expanded to capture the actual semantics of professional phrases.
Last, the predicted features extracted by the universal language model RoBERTa and the domain-specific model CT-BERT are fused by a multilayer perceptron to integrate fine-grained and high-level specific representations.
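A minimal sketch of the two ingredients described, vocabulary expansion and MLP feature fusion; the added tokens, the checkpoint names (including the public CT-BERT checkpoint), and the fusion dimensions are assumptions:
```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# 1) Expand the token vocabulary with professional phrases (terms assumed)
rob_tok = AutoTokenizer.from_pretrained("roberta-base")
rob_tok.add_tokens(["covid-19", "hydroxychloroquine"])
roberta = AutoModel.from_pretrained("roberta-base")
roberta.resize_token_embeddings(len(rob_tok))

# 2) Fuse pooled features from the universal and domain-specific encoders
ct_name = "digitalepidemiologylab/covid-twitter-bert-v2"
ct_tok = AutoTokenizer.from_pretrained(ct_name)
ct_bert = AutoModel.from_pretrained(ct_name)

fusion = nn.Sequential(  # the "multilayer perceptron" of the summary
    nn.Linear(roberta.config.hidden_size + ct_bert.config.hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

text = ["Vitamin C cures COVID-19, experts say."]
f1 = roberta(**rob_tok(text, return_tensors="pt")).last_hidden_state[:, 0]
f2 = ct_bert(**ct_tok(text, return_tensors="pt")).last_hidden_state[:, 0]
logits = fusion(torch.cat([f1, f2], dim=-1))
```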
arXiv Detail & Related papers (2021-01-14T09:05:42Z)
- Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking [0.3441021278275805]
We develop a two-stage automated pipeline for COVID-19 fake news detection using state-of-the-art machine learning models for natural language processing.
The first model leverages a novel fact-checking algorithm that retrieves the facts most relevant to a user's COVID-19 claim.
The second model verifies the level of truth in the claim by computing the textual entailment between the claim and the true facts retrieved from a manually curated COVID-19 dataset.
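A hedged sketch of the two stages, using TF-IDF retrieval as a stand-in for the paper's fact-checking algorithm and the off-the-shelf roberta-large-mnli model for entailment; the fact snippets are invented examples, not the paper's curated dataset:
```python
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoModelForSequenceClassification, AutoTokenizer

facts = [  # invented stand-ins for a curated COVID-19 fact base
    "Vaccines approved by the WHO reduce severe COVID-19 outcomes.",
    "Drinking bleach does not cure COVID-19 and is dangerous.",
]
claim = "Drinking bleach cures COVID-19."

# Stage 1: retrieve the most relevant fact (TF-IDF stand-in retriever)
vec = TfidfVectorizer().fit(facts + [claim])
sims = cosine_similarity(vec.transform([claim]), vec.transform(facts))[0]
best_fact = facts[sims.argmax()]

# Stage 2: textual entailment between the retrieved fact and the claim
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
with torch.no_grad():
    probs = nli(**tok(best_fact, claim, return_tensors="pt")).logits.softmax(-1)
print(best_fact, probs)  # [contradiction, neutral, entailment]
```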
arXiv Detail & Related papers (2020-11-26T11:50:45Z)
- Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement [53.47564132861866]
We find that a hybrid architecture, namely CNN-TT, is capable of maintaining a good quality performance with a reduced model parameter size.
CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality.
arXiv Detail & Related papers (2020-07-25T22:21:05Z)
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference [69.93692147242284]
Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications.
We propose a simple but effective method, DeeBERT, to accelerate BERT inference.
Experiments show that DeeBERT is able to save up to 40% inference time with minimal degradation in model quality.
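A hedged sketch of the early-exit idea: run BERT layer by layer, attach a small classifier ("off-ramp") to each layer, and stop once the prediction entropy falls below a threshold. The ramps here are untrained (DeeBERT trains them) and the threshold is arbitrary:
```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

bert = BertModel.from_pretrained("bert-base-uncased")
tok = BertTokenizer.from_pretrained("bert-base-uncased")
# One "off-ramp" classifier per encoder layer (untrained in this sketch)
ramps = nn.ModuleList(nn.Linear(bert.config.hidden_size, 2)
                      for _ in bert.encoder.layer)

def classify_with_early_exit(text, threshold=0.3):
    enc = tok(text, return_tensors="pt")
    hidden = bert.embeddings(enc["input_ids"])
    # Build the additive attention mask the encoder layers expect
    mask = enc["attention_mask"][:, None, None, :].to(hidden.dtype)
    ext_mask = (1.0 - mask) * torch.finfo(hidden.dtype).min
    for i, layer in enumerate(bert.encoder.layer):
        hidden = layer(hidden, attention_mask=ext_mask)[0]
        probs = ramps[i](hidden[:, 0]).softmax(-1)   # classify on [CLS]
        entropy = -(probs * probs.log()).sum(-1)
        if entropy.item() < threshold:               # confident enough: exit
            return i + 1, probs                      # layers used, prediction
    return len(bert.encoder.layer), probs

print(classify_with_early_exit("5G towers spread COVID-19."))
```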
arXiv Detail & Related papers (2020-04-27T17:58:05Z)