Exposing and Explaining Fake News On-the-Fly
- URL: http://arxiv.org/abs/2405.06668v2
- Date: Thu, 5 Sep 2024 10:07:46 GMT
- Title: Exposing and Explaining Fake News On-the-Fly
- Authors: Francisco de Arriba-Pérez, Silvia García-Méndez, Fátima Leal, Benedita Malheiro, Juan Carlos Burguillo
- Abstract summary: This work contributes an explainable, online classification method to recognize fake news in real time.
The proposed method combines unsupervised and supervised Machine Learning approaches with online-created lexica.
The performance of the proposed solution has been validated with real data sets from Twitter, and the results attain 80% in both accuracy and macro F-measure.
- Score: 4.278181795494584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms enable the rapid dissemination and consumption of information. However, users instantly consume such content regardless of the reliability of the shared data. Consequently, this crowdsourcing model is exposed to manipulation. This work contributes an explainable, online classification method to recognize fake news in real time. The proposed method combines unsupervised and supervised Machine Learning approaches with online-created lexica. Profiles are built from creator-, content- and context-based features extracted with Natural Language Processing techniques. The explainable classification mechanism displays in a dashboard the features selected for classification and the prediction confidence. The performance of the proposed solution has been validated with real data sets from Twitter, and the results attain 80% in both accuracy and macro F-measure. This proposal is the first to jointly provide data stream processing, profiling, classification and explainability. Ultimately, the proposed early detection, isolation and explanation of fake news contribute to increasing the quality and trustworthiness of social media content.
Related papers
- Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI [44.21078435758592]
Misinformation can spread quickly due to the ease of creating and disseminating content.
Traditional approaches to fake news detection often rely solely on content-based features.
We propose a comprehensive approach that integrates social context-based features with news content features.
arXiv Detail & Related papers (2024-10-03T15:49:35Z) - Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art in few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Interpretable Fake News Detection with Topic and Deep Variational Models [2.15242029196761]
We focus on fake news detection using interpretable features and methods.
We have developed a deep probabilistic model that integrates a dense representation of textual news.
Our model achieves comparable performance to state-of-the-art competing models.
arXiv Detail & Related papers (2022-09-04T05:31:00Z) - Automated Evidence Collection for Fake News Detection [11.324403127916877]
We propose a novel approach that improves over the current automatic fake news detection approaches.
Our approach extracts supporting evidence from the web articles and then selects appropriate text to be treated as evidence sets.
Our experiments, using both machine learning and deep learning-based methods, help perform an extensive evaluation of our approach.
arXiv Detail & Related papers (2021-12-13T09:38:41Z) - Interpretable Propaganda Detection in News Articles [30.192497301608164]
We propose to detect and show the use of deception techniques as a way to offer interpretability.
Our interpretable features can be easily combined with pre-trained language models, yielding state-of-the-art results.
arXiv Detail & Related papers (2021-08-29T09:57:01Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z) - Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection [139.3415751957195]
We study the detection of propagandistic text fragments in news articles.
We introduce an approach to inject declarative knowledge of fine-grained propaganda techniques.
arXiv Detail & Related papers (2020-04-29T13:46:15Z) - Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z) - Fake News Detection by means of Uncertainty Weighted Causal Graphs [0.0]
Social networks let people share news that is not necessarily trustworthy.
Fake news can negatively influence people's opinions about certain figures, groups or ideas.
It is desirable to design a system that is able to detect and classify information as fake.
arXiv Detail & Related papers (2020-02-04T00:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.