TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
- URL: http://arxiv.org/abs/2402.07776v2
- Date: Tue, 28 May 2024 06:14:34 GMT
- Title: TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
- Authors: Hui Liu, Wenya Wang, Haoru Li, Haoliang Li
- Abstract summary: We propose a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models.
This is achieved via a dual-system framework that integrates cognition and decision systems.
We present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
- Score: 37.394874500480206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of fake news has emerged as a severe societal problem, raising significant interest from industry and academia. While existing deep learning-based methods have made progress in detecting fake news accurately, their reliability may be compromised by non-transparent reasoning processes, poor generalization abilities and the inherent risks of integration with large language models (LLMs). To address this challenge, we propose TELLER, a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models. This is achieved via a dual-system framework that integrates cognition and decision systems, adhering to the principles above. The cognition system harnesses human expertise to generate logical predicates, which guide LLMs in generating human-readable logic atoms. Meanwhile, the decision system deduces generalizable logic rules to aggregate these atoms, enabling the identification of the truthfulness of the input news across diverse domains and enhancing transparency in the decision-making process. Finally, we present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework. Our implementation is available at https://github.com/less-and-less-bugs/Trust_TELLER.
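To make the dual-system pipeline concrete, below is a minimal, illustrative sketch of the idea described in the abstract: expert-defined predicates are posed to an LLM as yes/no questions (cognition), and the resulting logic atoms are aggregated by an interpretable rule (decision). This is not the authors' implementation; the names (PREDICATES, RULE_WEIGHTS, query_llm) and the hand-set weights are assumptions for illustration, whereas TELLER learns its aggregation rules from data.

```python
# Illustrative sketch of a cognition/decision split for fake news detection.
# Not the TELLER implementation: predicates, weights and the LLM stub below
# are hypothetical placeholders.

from typing import Callable, Dict

# Expert-defined logical predicates, phrased as questions an LLM can answer.
PREDICATES: Dict[str, str] = {
    "consistent_with_evidence": "Is the claim consistent with the retrieved evidence?",
    "source_is_credible": "Does the claim cite a credible source?",
    "no_exaggeration": "Is the claim free of exaggerated or sensational wording?",
}

# Hand-set weights for a simple decision rule; TELLER learns such rules instead.
RULE_WEIGHTS = {
    "consistent_with_evidence": 0.6,
    "source_is_credible": 0.25,
    "no_exaggeration": 0.15,
}


def cognition_system(news: str, query_llm: Callable[[str], bool]) -> Dict[str, bool]:
    """Ask the LLM one question per predicate, yielding human-readable logic atoms."""
    return {
        name: query_llm(f"News: {news}\nQuestion: {question}\nAnswer yes or no.")
        for name, question in PREDICATES.items()
    }


def decision_system(atoms: Dict[str, bool]) -> str:
    """Aggregate the logic atoms with a weighted rule and threshold the score."""
    score = sum(RULE_WEIGHTS[name] for name, value in atoms.items() if value)
    return "real" if score >= 0.5 else "fake"


if __name__ == "__main__":
    # Stub LLM for demonstration: answers "yes" whenever the claim mentions an
    # "official report"; a real system would call an actual LLM here.
    fake_llm = lambda prompt: "official report" in prompt.lower()
    atoms = cognition_system("An official report confirms the new policy.", fake_llm)
    print(atoms, "->", decision_system(atoms))
```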
Related papers
- FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" [74.7488607599921]
FaithEval is a benchmark to evaluate the faithfulness of large language models (LLMs) in contextual scenarios.
FaithEval comprises 4.9K high-quality problems in total, validated through a rigorous four-stage context construction and validation framework.
arXiv Detail & Related papers (2024-09-30T06:27:53Z) - Neurosymbolic AI approach to Attribution in Large Language Models [5.3454230926797734]
Neurosymbolic AI (NesyAI) combines the strengths of neural networks with structured symbolic reasoning.
This paper explores how NesyAI frameworks can enhance existing attribution models, offering more reliable, interpretable, and adaptable systems.
arXiv Detail & Related papers (2024-09-30T02:20:36Z) - Proof of Thought : Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning [1.3003982724617653]
Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning.
This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs.
Key contributions include a robust type system with sort management for enhanced logical integrity and an explicit representation of rules that clearly distinguishes factual from inferential knowledge.
arXiv Detail & Related papers (2024-09-25T18:35:45Z) - DAAD: Dynamic Analysis and Adaptive Discriminator for Fake News Detection [23.17963985187272]
We propose a Dynamic Analysis and Adaptive Discriminator (DAAD) approach for fake news detection.
For knowledge-based methods, we introduce the Monte Carlo Tree Search (MCTS) algorithm.
For semantic-based methods, we define four typical deceit patterns.
arXiv Detail & Related papers (2024-08-20T14:13:54Z) - Interpretable Concept-Based Memory Reasoning [12.562474638728194]
Concept-based Memory Reasoner (CMR) is a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process.
CMR achieves comparable accuracy-interpretability trade-offs to state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and allows pre-deployment verification.
arXiv Detail & Related papers (2024-07-22T10:32:48Z) - Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors [38.75533934195315]
Large Language Models (LLMs) are known for their remarkable reasoning and generative capabilities.
We introduce a novel retrieval-augmented LLM framework, the first of its kind to automatically and strategically extract key evidence from web sources for claim verification.
Our framework ensures the acquisition of sufficient, relevant evidence, thereby enhancing performance.
arXiv Detail & Related papers (2024-03-14T00:35:39Z) - Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning [85.1541170468617]
This paper reconsiders the nature of commonsense reasoning and proposes a novel commonsense reasoning metric, Non-Replacement Confidence (NRC).
The proposed method boosts zero-shot performance on two commonsense reasoning benchmarks and on a further seven commonsense question-answering datasets.
arXiv Detail & Related papers (2022-08-23T14:42:14Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)