Agentic Multi-Persona Framework for Evidence-Aware Fake News Detection
- URL: http://arxiv.org/abs/2512.21039v1
- Date: Wed, 24 Dec 2025 08:06:52 GMT
- Title: Agentic Multi-Persona Framework for Evidence-Aware Fake News Detection
- Authors: Roopa Bukke, Soumya Pandey, Suraj Kumar, Soumi Chattopadhyay, Chandranath Adak
- Abstract summary: AMPEND-LS is an agentic multi-persona evidence-grounded framework for multimodal fake news detection. It integrates textual, visual, and contextual signals through a structured reasoning pipeline powered by LLMs. Experiments show that AMPEND-LS consistently outperformed state-of-the-art baselines in accuracy, F1 score, and robustness. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid proliferation of online misinformation poses significant risks to public trust, policy, and safety, necessitating reliable automated fake news detection. Existing methods often struggle with multimodal content, domain generalization, and explainability. We propose AMPEND-LS, an agentic multi-persona evidence-grounded framework with LLM-SLM synergy for multimodal fake news detection. AMPEND-LS integrates textual, visual, and contextual signals through a structured reasoning pipeline powered by LLMs, augmented with reverse image search, knowledge graph paths, and persuasion strategy analysis. To improve reliability, we introduce a credibility fusion mechanism combining semantic similarity, domain trustworthiness, and temporal context, and a complementary SLM classifier to mitigate LLM uncertainty and hallucinations. Extensive experiments across three benchmark datasets demonstrate that AMPEND-LS consistently outperforms state-of-the-art baselines in accuracy, F1 score, and robustness. Qualitative case studies further highlight its transparent reasoning and resilience against evolving misinformation. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.
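The abstract does not specify how the credibility fusion mechanism combines its three signals. A minimal illustrative sketch, assuming a simple weighted linear fusion of semantic similarity, domain trustworthiness, and temporal context; the signal names, weights, and clamping are hypothetical and not taken from the paper:

```python
# Hypothetical sketch of a credibility-fusion step: combine semantic
# similarity, domain trustworthiness, and temporal relevance into a
# single evidence-credibility score. Weights and field names are
# illustrative assumptions, not the paper's actual formulation.
from dataclasses import dataclass


@dataclass
class Evidence:
    semantic_similarity: float  # claim-evidence similarity, in [0, 1]
    domain_trust: float         # source-domain trust score, in [0, 1]
    temporal_relevance: float   # recency of the evidence, in [0, 1]


def fuse_credibility(ev: Evidence,
                     weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted linear fusion of the three credibility signals,
    clamped to [0, 1]."""
    w_sem, w_dom, w_time = weights
    score = (w_sem * ev.semantic_similarity
             + w_dom * ev.domain_trust
             + w_time * ev.temporal_relevance)
    return max(0.0, min(1.0, score))


# Example: strong semantic match from a moderately trusted, recent source.
ev = Evidence(semantic_similarity=0.9, domain_trust=0.6,
              temporal_relevance=0.8)
print(round(fuse_credibility(ev), 2))  # 0.79
```

In a pipeline like the one described, such a fused score could gate whether a piece of retrieved evidence is passed on to the LLM reasoning stage or down-weighted before the SLM classifier.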
Related papers
- DIVER: Dynamic Iterative Visual Evidence Reasoning for Multimodal Fake News Detection [6.225860651499494]
Multimodal fake news detection is crucial for mitigating adversarial misinformation. We propose DIVER (Dynamic Iterative Visual Evidence Reasoning), a framework grounded in a progressive, evidence-driven reasoning paradigm. Experiments on Weibo, Weibo21, and GossipCop demonstrate that DIVER outperforms state-of-the-art baselines by an average of 2.72%.
arXiv Detail & Related papers (2026-01-12T04:01:33Z) - Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking [64.97768177044355]
Large Language Models (LLMs) are increasingly deployed in real-world fact-checking systems. We present FactArena, a fully automated arena-style evaluation framework. Our analyses reveal significant discrepancies between static claim-verification accuracy and end-to-end fact-checking competence.
arXiv Detail & Related papers (2026-01-06T02:51:56Z) - ZoFia: Zero-Shot Fake News Detection with Entity-Guided Retrieval and Multi-LLM Interaction [14.012874564599272]
ZoFia is a novel two-stage zero-shot fake news detection framework. First, we introduce Hierarchical Salience to quantify the importance of entities in the news content. We then propose the SC-MMR algorithm to effectively select an informative and diverse set of keywords.
arXiv Detail & Related papers (2025-11-03T03:29:42Z) - Enhancing Factual Accuracy and Citation Generation in LLMs via Multi-Stage Self-Verification [41.99844472131922]
This research introduces VeriFact-CoT, a novel method designed to address the pervasive issues of hallucination and the absence of credible citation sources in Large Language Models (LLMs). By incorporating a multi-stage mechanism of 'fact verification-reflection-citation integration,' VeriFact-CoT empowers LLMs to critically self-examine and revise their intermediate reasoning steps and final answers.
arXiv Detail & Related papers (2025-09-06T15:07:59Z) - Understanding and Benchmarking the Trustworthiness in Multimodal LLMs for Video Understanding [59.50808215134678]
This study introduces Trust-videoLLMs, a first comprehensive benchmark evaluating 23 state-of-the-art videoLLMs. Results reveal significant limitations in dynamic scene comprehension, cross-modal resilience, and real-world risk mitigation.
arXiv Detail & Related papers (2025-06-14T04:04:54Z) - Debunk and Infer: Multimodal Fake News Detection via Diffusion-Generated Evidence and LLM Reasoning [34.75988591416631]
We propose a Debunk-and-Infer framework for Fake News Detection (DIFND). DIFND integrates the generative strength of conditional diffusion models with the collaborative reasoning capabilities of multimodal large language models. Experiments on the FakeSV and FVC datasets show that DIFND not only outperforms existing approaches but also delivers trustworthy decisions.
arXiv Detail & Related papers (2025-06-11T09:08:43Z) - MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs [66.14178164421794]
We introduce MetaFaith, a novel prompt-based calibration approach inspired by human metacognition. We show that MetaFaith robustly improves faithful calibration across diverse models and task domains, enabling up to 61% improvement in faithfulness.
arXiv Detail & Related papers (2025-05-30T17:54:08Z) - Challenges and Innovations in LLM-Powered Fake News Detection: A Synthesis of Approaches and Future Directions [0.0]
The pervasive dissemination of fake news through social media platforms poses critical risks to public trust. Recent works power detection with large language model advances in multimodal frameworks. The review further identifies critical gaps in adaptability to dynamic social media trends, real-time detection, and cross-platform detection capabilities.
arXiv Detail & Related papers (2025-02-01T06:56:17Z) - MAD-Sherlock: Multi-Agent Debate for Visual Misinformation Detection [36.12673167913763]
We introduce MAD-Sherlock, a multi-agent debate system for out-of-context misinformation detection. MAD-Sherlock frames detection as a multi-agent debate, reflecting the diverse and conflicting discourse found online. Our framework is domain- and time-agnostic, requiring no finetuning, yet achieves state-of-the-art accuracy with in-depth explanations.
arXiv Detail & Related papers (2024-10-26T10:34:22Z) - Dynamic Analysis and Adaptive Discriminator for Fake News Detection [59.41431561403343]
We propose a Dynamic Analysis and Adaptive Discriminator (DAAD) approach for fake news detection. For knowledge-based methods, we introduce the Monte Carlo Tree Search algorithm to leverage the self-reflective capabilities of large language models. For semantic-based methods, we define four typical deceit patterns to reveal the mechanisms behind fake news creation.
arXiv Detail & Related papers (2024-08-20T14:13:54Z) - TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs [50.259001311894295]
We propose a novel TRansformer-based Attribution framework using Contrastive Embeddings called TRACE.
We show that TRACE significantly improves the ability to attribute sources accurately, making it a valuable tool for enhancing the reliability and trustworthiness of large language models.
arXiv Detail & Related papers (2024-07-06T07:19:30Z) - MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs. Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts. Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z) - A Survey on Detection of LLMs-Generated Content [97.87912800179531]
The ability to detect LLMs-generated content has become of paramount importance.
We aim to provide a detailed overview of existing detection strategies and benchmarks.
We also posit the necessity for a multi-faceted approach to defend against various attacks.
arXiv Detail & Related papers (2023-10-24T09:10:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.