Interpretable Fake News Detection with Topic and Deep Variational Models
- URL: http://arxiv.org/abs/2209.01536v1
- Date: Sun, 4 Sep 2022 05:31:00 GMT
- Title: Interpretable Fake News Detection with Topic and Deep Variational Models
- Authors: Marjan Hosseini, Alireza Javadian Sabet, Suining He, and Derek Aguiar
- Abstract summary: We focus on fake news detection using interpretable features and methods.
We have developed a deep probabilistic model that integrates a dense representation of textual news with semantic topic-related features.
Our model achieves performance comparable to that of state-of-the-art competing models.
- Score: 2.15242029196761
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing societal dependence on social media and user generated content
for news and information has increased the influence of unreliable sources and
fake content, which muddles public discourse and lessens trust in the media.
Validating the credibility of such information is a difficult task that is
susceptible to confirmation bias, leading to the development of algorithmic
techniques to distinguish between fake and real news. However, most existing
methods are difficult to interpret, which makes it hard to establish trust in
their predictions, and they rely on assumptions that are unrealistic in many
real-world scenarios, e.g., the availability of audiovisual features or provenance. In
this work, we focus on fake news detection of textual content using
interpretable features and methods. In particular, we have developed a deep
probabilistic model that integrates a dense representation of textual news
using a variational autoencoder and bi-directional Long Short-Term Memory
(LSTM) networks with semantic topic-related features inferred from a Bayesian
admixture model. Extensive experimental studies with three real-world datasets
demonstrate that our model achieves comparable performance to state-of-the-art
competing models while facilitating model interpretability from the learned
topics. Finally, we have conducted model ablation studies to justify the
effectiveness of integrating neural embeddings and topic features, both
quantitatively, by evaluating predictive performance, and qualitatively,
through class separability in lower-dimensional embeddings.
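
The core fusion idea in the abstract, a dense neural embedding combined with interpretable topic proportions, can be sketched minimally. Everything below is a hypothetical illustration: the dimensions, the logistic head, and the function names are assumptions, not the authors' architecture, and the random vectors stand in for a trained VAE/BiLSTM encoder and a fitted admixture (LDA-style) topic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(dense_embedding, topic_proportions):
    """Concatenate a dense text embedding with topic-proportion features."""
    return np.concatenate([dense_embedding, topic_proportions])

def predict_fake(features, weights, bias):
    """Logistic classifier over the fused feature vector."""
    z = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))  # probability in (0, 1)

# Toy stand-ins: a 16-d neural embedding (e.g. from a VAE/BiLSTM encoder)
# and a 5-topic proportion vector (e.g. from an LDA-style admixture model).
embedding = rng.standard_normal(16)
topics = rng.dirichlet(np.ones(5))        # non-negative, sums to 1
fused = fuse_features(embedding, topics)  # 21-d fused feature vector

weights = rng.standard_normal(21)
prob = predict_fake(fused, weights, bias=0.0)
```

The topic block of the fused vector is what keeps the model interpretable: each of its coordinates is a proportion tied to a learned topic, so a classifier weight on it can be read directly.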
Related papers
- Modality Interactive Mixture-of-Experts for Fake News Detection [13.508494216511094]
We present Modality Interactive Mixture-of-Experts for Fake News Detection (MIMoE-FND)
MIMoE-FND is a novel hierarchical Mixture-of-Experts framework designed to enhance multimodal fake news detection.
We evaluate our approach on three real-world benchmarks spanning two languages, demonstrating its superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-01-21T16:49:00Z)
- A Hybrid Attention Framework for Fake News Detection with Large Language Models [0.0]
We propose a novel framework to identify and classify fake news by integrating textual statistical features and deep semantic features.
Our approach utilizes the contextual understanding capability of the large language model for text analysis.
Our model significantly outperforms existing methods, with a 1.5% improvement in F1 score.
arXiv Detail & Related papers (2025-01-21T08:26:20Z)
- A Self-Learning Multimodal Approach for Fake News Detection [35.98977478616019]
We introduce a self-learning multimodal model for fake news classification.
The model leverages contrastive learning, a robust method for feature extraction that operates without requiring labeled data.
Our experimental results on a public dataset demonstrate that the proposed model outperforms several state-of-the-art classification approaches.
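
The contrastive objective this entry relies on (learning representations from agreement between views, without labels) can be sketched in an InfoNCE/NT-Xent style. The function name, dimensions, and temperature below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nt_xent_pair(z_i, z_j, z_negatives, temperature=0.5):
    """InfoNCE-style loss for one positive pair against a set of negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(z_i, z_j) / temperature)
    negs = sum(np.exp(cos(z_i, z_k) / temperature) for z_k in z_negatives)
    return -np.log(pos / (pos + negs))

rng = np.random.default_rng(1)
anchor = rng.standard_normal(8)
positive = anchor + 0.01 * rng.standard_normal(8)  # lightly augmented view
negatives = [rng.standard_normal(8) for _ in range(4)]

loss_aligned = nt_xent_pair(anchor, positive, negatives)
loss_mismatched = nt_xent_pair(anchor, negatives[0], negatives[1:] + [positive])
```

Minimizing this loss pulls the two views of the same item together and pushes unrelated items apart, which is why the aligned pairing yields a lower loss than the mismatched one here.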
arXiv Detail & Related papers (2024-12-08T07:41:44Z)
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [49.60774626839712]
Multimodal generative models have sparked critical discussions on their fairness, reliability, and potential for misuse.
We propose an evaluation framework designed to assess model reliability through their responses to perturbations in the embedding space.
Our method lays the groundwork for detecting unreliable, bias-injected models and retrieval of bias provenance.
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI [44.21078435758592]
Misinformation can spread quickly due to the ease of creating and disseminating content.
Traditional approaches to fake news detection often rely solely on content-based features.
We propose a comprehensive approach that integrates social context-based features with news content features.
arXiv Detail & Related papers (2024-10-03T15:49:35Z)
- Capturing Pertinent Symbolic Features for Enhanced Content-Based Misinformation Detection [0.0]
The detection of misleading content presents a significant hurdle due to its extreme linguistic and domain variability.
This paper analyzes the linguistic attributes that characterize this phenomenon and how representative of such features some of the most popular misinformation datasets are.
We demonstrate that the appropriate use of pertinent symbolic knowledge in combination with neural language models is helpful in detecting misleading content.
arXiv Detail & Related papers (2024-01-29T16:42:34Z)
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective [84.78604733927887]
Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks.
Recent studies show that such BERT-based models are vulnerable to textual adversarial attacks.
We propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models.
arXiv Detail & Related papers (2020-10-05T20:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.