A Proposed Bi-LSTM Method to Fake News Detection
- URL: http://arxiv.org/abs/2206.13982v1
- Date: Wed, 15 Jun 2022 06:36:42 GMT
- Title: A Proposed Bi-LSTM Method to Fake News Detection
- Authors: Taminul Islam, MD Alamin Hosen, Akhi Mony, MD Touhid Hasan, Israt
Jahan, Arindom Kundu
- Abstract summary: False news was a determining factor in influencing the outcome of the U.S. presidential election.
Bi-LSTM was applied to determine if the news is false or real.
After building and training the model, the work achieved 84% accuracy and a 62.0 macro-F1 score on the training data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen an explosion in social media usage, allowing people to
connect with others. Since the appearance of platforms such as Facebook and
Twitter, such platforms have influenced how we speak, think, and behave. The
existence of fake news undermines confidence in this content. For instance,
false news was a determining factor in influencing the outcome of the U.S.
presidential election, among other events. Because this information is so
harmful, it is essential to have the necessary tools to detect and resist it.
In this study, we applied Bidirectional Long Short-Term Memory (Bi-LSTM) to
determine whether news is false or real. A number of foreign websites and
newspapers were used for data collection. After building and training the
model, the work achieved 84% accuracy and a 62.0 macro-F1 score on the
training data.
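The core classification step described in the abstract can be sketched as follows. This is a minimal NumPy forward pass of a Bi-LSTM binary classifier, not the authors' actual architecture: the embedding size, hidden size, weight initialization, and output layer are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(x_seq, W, U, b, hidden):
    """Run one LSTM direction over a sequence of token embeddings."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in x_seq:
        z = W @ x + U @ h + b               # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)         # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # cell-state update
        h = o * np.tanh(c)                  # hidden state
    return h

def bilstm_predict(x_seq, params):
    """Concatenate the final forward and backward hidden states,
    then apply a logistic output layer (fake vs. real)."""
    h_f = lstm_pass(x_seq, *params["fwd"], params["hidden"])
    h_b = lstm_pass(x_seq[::-1], *params["bwd"], params["hidden"])
    h = np.concatenate([h_f, h_b])
    return sigmoid(params["w_out"] @ h + params["b_out"])

def init_params(embed_dim, hidden, rng):
    # Small random weights; real training would learn these.
    def direction():
        return (rng.normal(0, 0.1, (4 * hidden, embed_dim)),
                rng.normal(0, 0.1, (4 * hidden, hidden)),
                np.zeros(4 * hidden))
    return {"fwd": direction(), "bwd": direction(), "hidden": hidden,
            "w_out": rng.normal(0, 0.1, 2 * hidden), "b_out": 0.0}

rng = np.random.default_rng(0)
params = init_params(embed_dim=8, hidden=16, rng=rng)
tokens = rng.normal(size=(12, 8))           # 12 tokens, 8-dim embeddings
p_fake = bilstm_predict(tokens, params)     # probability in (0, 1)
```

In practice such a model is wrapped in a framework such as Keras or PyTorch and trained with a binary cross-entropy loss; the sketch only shows the bidirectional forward pass that gives the method its name.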
Related papers
- A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention [0.0]
We propose a semi-supervised self-learning method in which a sentiment analysis is acquired by some state-of-the-art pretrained models.
Our learning model is trained in a semi-supervised fashion and incorporates LSTM with self-attention layers.
We benchmark our model on a dataset of 20,000 news items along with their user feedback; it shows better precision, recall, and related measures than competing fake news detection methods.
arXiv Detail & Related papers (2024-07-27T20:00:10Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by 3.62 - 7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Automated Fake News Detection using cross-checking with reliable sources [0.0]
We use natural human behavior to cross-check new information with reliable sources.
We implement this for Twitter and build a model that flags fake tweets.
Our implementation of this approach achieves 70% accuracy, which outperforms other generic fake-news classification models.
arXiv Detail & Related papers (2022-01-01T00:59:58Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Fake News Spreader Detection on Twitter using Character N-Grams. Notebook for PAN at CLEF 2020 [0.0]
This notebook describes our profiling system for the fake news detection task on Twitter.
We conduct different feature extraction techniques and learning experiments from a multilingual perspective.
Our models achieve an overall accuracy of 73% and 79% on the English and Spanish official test sets, respectively.
arXiv Detail & Related papers (2020-09-29T08:32:32Z)
- Modeling the spread of fake news on Twitter [2.7910505923792637]
We propose a point process model of the spread of fake news on Twitter.
We show that the proposed model is superior to the current state-of-the-art methods in accurately predicting the evolution of the spread of a fake news item.
The proposed model contributes to understanding the dynamics of the spread of fake news on social media.
arXiv Detail & Related papers (2020-07-28T08:28:16Z)
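The point-process idea in the last entry above can be sketched with a Hawkes-style conditional intensity, where each observed share of a story temporarily raises the expected rate of further shares. This is only an illustration of the general point-process family; the kernel form and all parameter values below are assumptions, not the paper's fitted model.

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    mu is the base sharing rate; each past share at time t_i adds an
    exponentially decaying burst of expected new shares."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

shares = [0.0, 0.4, 0.5]          # times of observed shares of one story
lam_now = hawkes_intensity(1.0, shares)
lam_later = hawkes_intensity(5.0, shares)  # excitation has decayed
```

Fitting such a model means choosing mu, alpha, and beta to maximize the likelihood of the observed share times, after which the intensity can be extrapolated to predict how the spread evolves.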
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.