FakeWatch: A Framework for Detecting Fake News to Ensure Credible Elections
- URL: http://arxiv.org/abs/2403.09858v2
- Date: Sat, 4 May 2024 18:53:38 GMT
- Title: FakeWatch: A Framework for Detecting Fake News to Ensure Credible Elections
- Authors: Shaina Raza, Tahniat Khan, Veronica Chatrath, Drai Paulen-Patterson, Mizanur Rahman, Oluwanifemi Bamgbose,
- Abstract summary: We introduce FakeWatch, a comprehensive framework carefully designed to detect fake news.
Our framework integrates a model hub comprising both traditional machine learning (ML) techniques and state-of-the-art Language Models (LMs).
Our objective is to provide the research community with adaptable and precise classification models adept at identifying election-related fake news.
- Score: 5.15641542196944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's technologically driven world, the rapid spread of fake news, particularly during critical events like elections, poses a growing threat to the integrity of information. To tackle this challenge head-on, we introduce FakeWatch, a comprehensive framework carefully designed to detect fake news. Leveraging a newly curated dataset of North American election-related news articles, we construct robust classification models. Our framework integrates a model hub comprising both traditional machine learning (ML) techniques and state-of-the-art Language Models (LMs) to discern fake news effectively. Our objective is to provide the research community with adaptable and precise classification models adept at identifying election-related fake news. Quantitative evaluations of fake news classifiers on our dataset reveal that, while state-of-the-art LMs exhibit a slight edge over traditional ML models, classical models remain competitive due to their balance of accuracy and computational efficiency. Additionally, qualitative analyses shed light on patterns within fake news articles. We provide our labeled data at https://huggingface.co/datasets/newsmediabias/fake_news_elections_labelled_data and model https://huggingface.co/newsmediabias/FakeWatch for reproducibility and further research.
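As a toy illustration of the "traditional ML" side of such a model hub, the sketch below trains a minimal bag-of-words Naive Bayes text classifier using only the Python standard library. The class name, the example headlines, and the labels are invented for illustration; this is not the FakeWatch implementation, and a real classifier would be trained on the released labeled dataset with proper tokenization and evaluation.

```python
# Minimal sketch of a classical bag-of-words Naive Bayes news classifier.
# All names and example texts below are illustrative, not from FakeWatch.
from collections import Counter
import math


class NaiveBayesNewsClassifier:
    def __init__(self, alpha=1.0):
        self.alpha = alpha           # Laplace smoothing constant
        self.word_counts = {}        # label -> Counter of token counts
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def fit(self, texts, labels):
        """Accumulate per-label token counts from whitespace-tokenized text."""
        for text, label in zip(texts, labels):
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for token in text.lower().split():
                counts[token] += 1
                self.vocab.add(token)

    def predict(self, text):
        """Return the label with the highest log prior + log likelihood."""
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            score = math.log(self.doc_counts[label] / total_docs)
            denom = sum(counts.values()) + self.alpha * len(self.vocab)
            for token in text.lower().split():
                score += math.log((counts[token] + self.alpha) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label


clf = NaiveBayesNewsClassifier()
clf.fit(
    ["official results certified by the commission",
     "shocking secret ballots destroyed overnight"],
    ["real", "fake"],
)
print(clf.predict("ballots destroyed in secret"))  # prints "fake"
```

On a real corpus one would swap this for TF-IDF features with a stronger classifier, or for one of the fine-tuned LMs the framework's model hub also includes; the point here is only the shape of the classical baseline.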
Related papers
- FakeWatch ElectionShield: A Benchmarking Framework to Detect Fake News for Credible US Elections [5.861836496977495]
We introduce FakeWatch ElectionShield, an innovative framework carefully designed to detect fake news.
We have created a novel dataset of North American election-related news articles through a blend of advanced language models (LMs) and thorough human verification.
Our goal is to provide the research community with adaptable and accurate classification models that recognize the dynamic nature of misinformation.
arXiv Detail & Related papers (2023-11-27T21:01:21Z)
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-(paraphrased) real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern that detectors trained exclusively on human-written articles can indeed perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection, outperforming prior methods by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- Multiverse: Multilingual Evidence for Fake News Detection [71.51905606492376]
Multiverse is a new feature based on multilingual evidence that can be used for fake news detection.
Our experiments confirm the hypothesis that cross-lingual evidence can be used as a feature for fake news detection.
arXiv Detail & Related papers (2022-11-25T18:24:17Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation, improving F1 scores by 3.62% to 7.69% on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Explainable Tsetlin Machine framework for fake news detection with credibility score assessment [16.457778420360537]
We propose a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM).
We use the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text.
For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy.
arXiv Detail & Related papers (2021-05-19T13:18:02Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Connecting the Dots Between Fact Verification and Fake News Detection [21.564628184287173]
We propose a simple yet effective approach to connect the dots between fact verification and fake news detection.
Our approach makes use of the recent success of fact verification models and enables zero-shot fake news detection.
arXiv Detail & Related papers (2020-10-11T09:28:52Z)
- MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models [40.51057705796747]
MALCOM is an end-to-end adversarial comment generation framework for attacking neural fake news detection models.
We demonstrate that, on average, MALCOM successfully misleads five of the latest neural detection models about 94% and 93.5% of the time.
We also compare our attack model with four baselines across two real-world datasets.
arXiv Detail & Related papers (2020-09-01T01:26:01Z)
- A Deep Learning Approach for Automatic Detection of Fake News [47.00462375817434]
We propose two deep learning models for solving the fake news detection problem in online news content across multiple domains.
We evaluate our techniques on two recently released fake news detection datasets, namely FakeNews AMT and Celebrity.
arXiv Detail & Related papers (2020-05-11T09:07:46Z)
- Weak Supervision for Fake News Detection via Reinforcement Learning [34.448503443582396]
We propose a weakly-supervised fake news detection framework, i.e., WeFEND.
The proposed framework consists of three main components: the annotator, the reinforced selector and the fake news detector.
We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports.
arXiv Detail & Related papers (2019-12-28T21:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.