TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
- URL: http://arxiv.org/abs/2402.07776v2
- Date: Tue, 28 May 2024 06:14:34 GMT
- Title: TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection
- Authors: Hui Liu, Wenya Wang, Haoru Li, Haoliang Li
- Abstract summary: We propose a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models.
This is achieved via a dual-system framework that integrates cognition and decision systems.
We present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
- Score: 37.394874500480206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of fake news has emerged as a severe societal problem, raising significant interest from industry and academia. While existing deep-learning based methods have made progress in detecting fake news accurately, their reliability may be compromised by non-transparent reasoning processes, poor generalization abilities and the inherent risks of integration with large language models (LLMs). To address this challenge, we propose TELLER, a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models. This is achieved via a dual-system framework that integrates cognition and decision systems, adhering to the principles above. The cognition system harnesses human expertise to generate logical predicates, which guide LLMs in generating human-readable logic atoms. Meanwhile, the decision system deduces generalizable logic rules to aggregate these atoms, enabling the identification of the truthfulness of the input news across diverse domains and enhancing transparency in the decision-making process. Finally, we present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework. Our implementation is available at https://github.com/less-and-less-bugs/Trust_TELLER.
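As a rough, non-authoritative illustration of the dual-system design sketched in this abstract, the following minimal Python sketch wires a cognition step (predicates instantiated as LLM questions yielding scored logic atoms) into a decision step (a soft logic rule aggregating the atoms). The predicate set, the `ask_llm` scorer, and the weighted aggregation are hypothetical stand-ins; the paper's actual components live in the linked repository.

```python
# Hypothetical sketch of a TELLER-style dual-system pipeline, based only on the
# abstract above: a cognition system turns expert predicates into LLM-scored
# logic atoms, and a decision system aggregates atoms with a soft logic rule.
from typing import Callable, Dict

# Illustrative predicate templates (an assumption, not the paper's actual set).
PREDICATES = {
    "consistent_with_evidence": "Does the claim '{news}' agree with retrieved evidence?",
    "source_is_credible": "Is the source of '{news}' generally credible?",
    "claim_is_verifiable": "Can the claim '{news}' be checked against public facts?",
}

def cognition_system(news: str, ask_llm: Callable[[str], float]) -> Dict[str, float]:
    """Instantiate each predicate as a question and let the LLM score it in [0, 1]."""
    return {name: ask_llm(q.format(news=news)) for name, q in PREDICATES.items()}

def decision_system(atoms: Dict[str, float], weights: Dict[str, float],
                    threshold: float = 0.5) -> bool:
    """Aggregate atom truth values with a weighted (soft) rule.

    A learned rule layer would replace these hand-set weights; a weighted
    average stands in here for the paper's deduced logic rules.
    """
    score = sum(weights[k] * v for k, v in atoms.items()) / sum(weights.values())
    return score >= threshold  # True -> news judged truthful

if __name__ == "__main__":
    stub_llm = lambda prompt: 0.3  # stub scorer standing in for a real LLM call
    atoms = cognition_system("Example headline", stub_llm)
    verdict = decision_system(atoms, {k: 1.0 for k in PREDICATES})
    print(atoms, "=> truthful" if verdict else "=> fake")
```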
Related papers
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [314.7991906491166]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across multiple dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z)
- Challenges and Innovations in LLM-Powered Fake News Detection: A Synthesis of Approaches and Future Directions [0.0]
The pervasiveness of fake news dissemination through social media platforms poses critical risks to public trust.
Recent work powers detection with large language models and with advances in multimodal frameworks.
The review further identifies critical gaps in adaptability to dynamic social media trends and in real-time, cross-platform detection capabilities.
arXiv Detail & Related papers (2025-02-01T06:56:17Z)
- Building Trustworthy AI: Transparent AI Systems via Large Language Models, Ontologies, and Logical Reasoning (TranspNet) [0.7420433640907689]
Growing concerns over the lack of transparency in AI, particularly in high-stakes fields like healthcare and finance, drive the need for explainable and trustworthy systems.
To address this, the paper proposes the TranspNet pipeline, which integrates symbolic AI with Large Language Models.
arXiv Detail & Related papers (2024-11-13T09:40:37Z)
- FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows" [74.7488607599921]
FaithEval is a benchmark to evaluate the faithfulness of large language models (LLMs) in contextual scenarios.
FaithEval comprises 4.9K high-quality problems in total, validated through a rigorous four-stage context construction and validation framework.
arXiv Detail & Related papers (2024-09-30T06:27:53Z)
- DAAD: Dynamic Analysis and Adaptive Discriminator for Fake News Detection [23.17963985187272]
We propose a Dynamic Analysis and Adaptive Discriminator (DAAD) approach for fake news detection.
For knowledge-based methods, we introduce the Monte Carlo Tree Search (MCTS) algorithm.
For semantic-based methods, we define four typical deceit patterns.
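The summary above only name-drops MCTS; as background, here is a generic Monte Carlo Tree Search skeleton in Python. It is a simplified sketch of the algorithm itself, under the assumption of caller-supplied `expand` and `rollout` functions, and is not DAAD's actual search procedure.

```python
# Generic Monte Carlo Tree Search skeleton (illustration only; the entry above
# does not say which states and rewards DAAD searches over).
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    """Upper confidence bound; unvisited nodes are explored first."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root, expand, rollout, iterations=100):
    """expand(state) -> list of child states; rollout(state) -> reward in [0, 1]."""
    for _ in range(iterations):
        node = root
        while node.children:                        # selection
            node = max(node.children, key=ucb)
        for s in expand(node.state):                # expansion
            node.children.append(Node(s, node))
        leaf = random.choice(node.children) if node.children else node
        reward = rollout(leaf.state)                # simulation
        while leaf:                                 # backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state
```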
arXiv Detail & Related papers (2024-08-20T14:13:54Z)
- Interpretable Concept-Based Memory Reasoning [12.562474638728194]
Concept-based Memory Reasoner (CMR) is a novel CBM designed to provide a human-understandable and provably-verifiable task prediction process.
CMR achieves better accuracy-interpretability trade-offs than state-of-the-art CBMs, discovers logic rules consistent with ground truths, allows for rule interventions, and allows pre-deployment verification.
arXiv Detail & Related papers (2024-07-22T10:32:48Z)
- Detect, Investigate, Judge and Determine: A Knowledge-guided Framework for Few-shot Fake News Detection [50.079690200471454]
Few-Shot Fake News Detection (FS-FND) aims to distinguish inaccurate news from real ones in extremely low-resource scenarios.
This task has garnered increased attention due to the widespread dissemination and harmful impact of fake news on social media.
We propose a Dual-perspective Knowledge-guided Fake News Detection (DKFND) model, designed to enhance LLMs from both inside and outside perspectives.
arXiv Detail & Related papers (2024-07-12T03:15:01Z)
- Re-Search for The Truth: Multi-round Retrieval-augmented Large Language Models are Strong Fake News Detectors [38.75533934195315]
Large Language Models (LLMs) are known for their remarkable reasoning and generative capabilities.
We introduce a novel retrieval-augmented LLM framework, the first of its kind to automatically and strategically extract key evidence from web sources for claim verification.
Our framework ensures the acquisition of sufficient, relevant evidence, thereby enhancing performance.
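As a hedged sketch of the multi-round retrieval-and-verify pattern this entry describes, the loop below alternates web search with an LLM judgment until a verdict or a round limit is reached. `web_search`, `ask_llm`, and the prompt format are hypothetical stand-ins, not the cited paper's implementation.

```python
# Sketch of a multi-round retrieval-augmented fact-checking loop, under the
# assumption of caller-supplied `web_search` and `ask_llm` functions.
from typing import Callable, List

def verify_claim(claim: str,
                 web_search: Callable[[str], List[str]],
                 ask_llm: Callable[[str], str],
                 max_rounds: int = 3) -> str:
    evidence: List[str] = []
    query = claim
    for _ in range(max_rounds):
        evidence.extend(web_search(query))          # accumulate evidence
        prompt = (f"Claim: {claim}\nEvidence:\n" + "\n".join(evidence) +
                  "\nAnswer TRUE, FALSE, or ask a follow-up search query "
                  "prefixed with SEARCH:")
        answer = ask_llm(prompt)
        if answer.startswith("SEARCH:"):            # model requests more evidence
            query = answer[len("SEARCH:"):].strip()
        else:
            return answer                           # TRUE or FALSE verdict
    return "NOT ENOUGH EVIDENCE"                    # fall back after max_rounds
```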
arXiv Detail & Related papers (2024-03-14T00:35:39Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
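To make "uncertainty as a form of transparency" concrete, a minimal sketch: report predictive entropy alongside the label and abstain when entropy exceeds a threshold. The 0.8-bit threshold and the label set are illustrative assumptions, not values from the review.

```python
# Minimal sketch of using predictive uncertainty as a transparency signal:
# surface entropy with the prediction and defer to a human when uncertain.
import math
from typing import List, Tuple

def entropy(probs: List[float]) -> float:
    """Shannon entropy of a predictive distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predict_with_uncertainty(probs: List[float], labels: List[str],
                             max_entropy: float = 0.8) -> Tuple[str, float]:
    h = entropy(probs)
    if h > max_entropy:                 # too uncertain: defer instead of guessing
        return "ABSTAIN / defer to human review", h
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], h

if __name__ == "__main__":
    print(predict_with_uncertainty([0.55, 0.45], ["real", "fake"]))  # abstains
    print(predict_with_uncertainty([0.95, 0.05], ["real", "fake"]))  # "real"
```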
arXiv Detail & Related papers (2020-11-15T17:26:14Z)