Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
- URL: http://arxiv.org/abs/2506.17878v1
- Date: Sun, 22 Jun 2025 02:39:27 GMT
- Title: Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
- Authors: Tam Trinh, Manh Nguyen, Truong-Son Hy
- Abstract summary: The rapid spread of misinformation in the digital era poses significant challenges to public discourse. Traditional human-led fact-checking methods, while credible, struggle with the volume and velocity of online content. This paper proposes a novel multi-agent system for automated fact-checking that enhances accuracy, efficiency, and explainability.
- Score: 1.515687944002438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid spread of misinformation in the digital era poses significant challenges to public discourse, necessitating robust and scalable fact-checking solutions. Traditional human-led fact-checking methods, while credible, struggle with the volume and velocity of online content, prompting the integration of automated systems powered by Large Language Models (LLMs). However, existing automated approaches often face limitations, such as handling complex claims, ensuring source credibility, and maintaining transparency. This paper proposes a novel multi-agent system for automated fact-checking that enhances accuracy, efficiency, and explainability. The system comprises four specialized agents: an Input Ingestion Agent for claim decomposition, a Query Generation Agent for formulating targeted subqueries, an Evidence Retrieval Agent for sourcing credible evidence, and a Verdict Prediction Agent for synthesizing veracity judgments with human-interpretable explanations. Evaluated on benchmark datasets (FEVEROUS, HOVER, SciFact), the proposed system achieves a 12.3% improvement in Macro F1-score over baseline methods. The system effectively decomposes complex claims, retrieves reliable evidence from trusted sources, and generates transparent explanations for verification decisions. Our approach contributes to the growing field of automated fact-checking by providing a more accurate, efficient, and transparent verification methodology that aligns with human fact-checking practices while maintaining scalability for real-world applications. Our source code is available at https://github.com/HySonLab/FactAgent
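As a concrete illustration, here is a minimal sketch of how the four-agent pipeline described above could be wired together. The `llm` and `search` callables, all prompts, and the label set are illustrative assumptions, not the authors' implementation; see the linked FactAgent repository for the actual code.

```python
# Minimal sketch of the four-agent pipeline. The `llm` and `search`
# callables, all prompts, and the label set are illustrative assumptions,
# not the authors' implementation (see the FactAgent repository for that).
from typing import Callable, List

def ingest_claim(llm: Callable[[str], str], claim: str) -> List[str]:
    """Input Ingestion Agent: decompose a complex claim into sub-claims."""
    out = llm(f"Decompose into atomic sub-claims, one per line:\n{claim}")
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def generate_queries(llm: Callable[[str], str], sub_claim: str) -> List[str]:
    """Query Generation Agent: formulate targeted search sub-queries."""
    out = llm(f"Write two short web-search queries to verify:\n{sub_claim}")
    return [q.strip() for q in out.splitlines() if q.strip()]

def retrieve_evidence(search: Callable[[str], List[str]], queries: List[str]) -> List[str]:
    """Evidence Retrieval Agent: gather evidence snippets for each query."""
    return [snippet for q in queries for snippet in search(q)]

def predict_verdict(llm: Callable[[str], str], claim: str, evidence: List[str]) -> str:
    """Verdict Prediction Agent: synthesize a labeled, explained verdict."""
    joined = "\n".join(evidence)
    return llm(f"Claim: {claim}\nEvidence:\n{joined}\n"
               "Answer SUPPORTED, REFUTED, or NOT ENOUGH INFO, then explain briefly.")

def fact_check(llm, search, claim: str) -> str:
    sub_claims = ingest_claim(llm, claim)
    evidence = [e for sc in sub_claims
                for e in retrieve_evidence(search, generate_queries(llm, sc))]
    return predict_verdict(llm, claim, evidence)
```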
Related papers
- Toward Verifiable Misinformation Detection: A Multi-Tool LLM Agent Framework [0.5999777817331317]
This research proposes an innovative verifiable misinformation detection LLM agent. The agent actively verifies claims through dynamic interaction with diverse web sources. It assesses information source credibility, synthesizes evidence, and provides a complete verifiable reasoning process.
arXiv Detail & Related papers (2025-08-05T05:15:03Z)
- Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG [51.120170062795566]
We propose Divide-Then-Align (DTA) to endow RAG systems with the ability to respond with "I don't know" when the query falls outside the knowledge boundary. DTA balances accuracy with appropriate abstention, enhancing the reliability and trustworthiness of retrieval-augmented systems.
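A minimal sketch of the abstention idea, assuming a retriever that returns scored passages; the 0.35 threshold and the score-based boundary test are placeholder assumptions, not DTA's actual method.

```python
# Sketch of knowledge-boundary abstention in the spirit of DTA.
# The retrieval scores and the 0.35 threshold are placeholder assumptions.
def answer_or_abstain(query, retriever, generator, threshold=0.35):
    docs = retriever(query)  # -> list of (passage, relevance_score) pairs
    if not docs or max(score for _, score in docs) < threshold:
        return "I don't know"  # query falls outside the knowledge boundary
    context = "\n".join(passage for passage, _ in docs)
    return generator(f"Context:\n{context}\n\nQuestion: {query}")
```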
arXiv Detail & Related papers (2025-05-27T08:21:21Z)
- Multi-agent Systems for Misinformation Lifecycle: Detection, Correction and Source Identification [0.0]
This paper introduces a novel multi-agent framework that covers the complete misinformation lifecycle. In contrast to single-agent or monolithic architectures, our approach employs five specialized agents.
arXiv Detail & Related papers (2025-05-23T06:05:56Z)
- A Generative-AI-Driven Claim Retrieval System Capable of Detecting and Retrieving Claims from Social Media Platforms in Multiple Languages [1.3331869040581863]
This research introduces an approach that retrieves previously fact-checked claims, evaluates their relevance to a given input, and provides supplementary information to support fact-checkers. Our method employs large language models (LLMs) to filter irrelevant fact-checks and generate concise summaries and explanations. Our results demonstrate that LLMs are able to filter out many irrelevant fact-checks and, therefore, reduce effort and streamline the fact-checking process.
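A minimal sketch of this retrieve-then-filter pattern, assuming precomputed claim embeddings; the cosine-similarity retrieval and the yes/no relevance prompt are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: retrieve nearest previously fact-checked claims by embedding
# similarity, then have an LLM filter and summarize the relevant ones.
# The embedding interface and prompts are illustrative assumptions.
import numpy as np

def retrieve_fact_checks(embed, post, fact_check_texts, fact_check_vecs, k=10):
    q = embed(post)  # -> 1-D numpy vector
    sims = fact_check_vecs @ q / (
        np.linalg.norm(fact_check_vecs, axis=1) * np.linalg.norm(q))
    return [fact_check_texts[i] for i in np.argsort(-sims)[:k]]

def filter_and_summarize(llm, post, candidates):
    kept = [c for c in candidates
            if llm(f"Does this fact-check address the post?\nPost: {post}\n"
                   f"Fact-check: {c}\nAnswer yes or no.").lower().startswith("yes")]
    return [(c, llm(f"Summarize in one sentence: {c}")) for c in kept]
```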
arXiv Detail & Related papers (2025-04-29T11:49:05Z)
- Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking [7.946359845249688]
Large Language Models (LLMs) have shown strong performance across fact-checking tasks. This paper investigates whether generative agents can meaningfully contribute to fact-checking tasks traditionally reserved for human crowds. Agent crowds outperform human crowds in truthfulness classification, exhibit higher internal consistency, and show reduced susceptibility to social and cognitive biases.
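A minimal sketch of an agent crowd aggregated by majority vote; the persona prompts and the two-way label set are assumptions, not the paper's protocol.

```python
# Sketch of an "agent crowd": several LLM judges rate a claim and the
# crowd verdict is the majority vote. Personas and prompts are assumptions.
from collections import Counter

def crowd_verdict(llm, claim, personas, labels=("true", "false")):
    votes = []
    for persona in personas:
        out = llm(f"You are {persona}. Is this claim true or false? "
                  f"Answer with one word.\nClaim: {claim}").strip().lower()
        if out in labels:
            votes.append(out)
    return Counter(votes).most_common(1)[0][0] if votes else "undecided"
```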
arXiv Detail & Related papers (2025-04-24T18:49:55Z)
- VerifiAgent: a Unified Verification Agent in Language Model Reasoning [10.227089771963943]
We propose a unified verification agent that integrates two levels of verification: meta-verification and tool-based adaptive verification. VerifiAgent autonomously selects appropriate verification tools based on the reasoning type. It can be effectively applied to inference scaling, achieving better results with fewer generated samples and lower cost.
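A minimal sketch of reasoning-type-based tool dispatch; the tool names, type categories, and classifier prompt are hypothetical, not VerifiAgent's actual tool set.

```python
# Sketch of adaptive tool selection keyed on reasoning type, in the
# spirit of VerifiAgent. Tool names and categories are hypothetical.
def verify(llm, tools, question, answer):
    rtype = llm("Classify the reasoning type as one of: math, logic, "
                f"commonsense, hybrid.\nQuestion: {question}").strip().lower()
    tool = tools.get(rtype, tools["commonsense"])  # fall back to a default tool
    evidence = tool(question, answer)              # e.g. code executor, search
    return llm(f"Given this check result:\n{evidence}\n"
               f"Is the answer '{answer}' correct? Answer yes/no with a reason.")
```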
arXiv Detail & Related papers (2025-04-01T04:05:03Z)
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
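A minimal sketch of normalization gated by check-worthiness, mirroring the description above; the prompts and the yes/no gate are illustrative assumptions, not CACN's actual design.

```python
# Sketch of claim normalization with chain-of-thought plus a
# check-worthiness gate. Prompts and the gate are illustrative assumptions.
def normalize_claim(llm, post):
    worthy = llm(f"Is this post check-worthy? Answer yes or no.\n{post}")
    if not worthy.strip().lower().startswith("yes"):
        return None  # skip posts that carry no checkable factual assertion
    reasoning = llm("Think step by step: what is the core factual "
                    f"assertion in this post?\n{post}")
    return llm(f"Post: {post}\nReasoning: {reasoning}\n"
               "State the normalized claim in one clear sentence.")
```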
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- ExClaim: Explainable Neural Claim Verification Using Rationalization [8.369720566612111]
ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
arXiv Detail & Related papers (2023-01-21T08:26:27Z)
- GERE: Generative Evidence Retrieval for Fact Verification [57.78768817972026]
We propose GERE, the first system that retrieves evidence in a generative fashion.
The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines.
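A minimal sketch of the generative-retrieval idea: a seq2seq model emits candidate evidence titles directly rather than scoring every document in the corpus. The `seq2seq` interface and the title filter are assumptions, not GERE's decoding scheme.

```python
# Sketch of generative evidence retrieval: a seq2seq model emits document
# titles directly instead of ranking a corpus. The `seq2seq` callable and
# the valid-title filter are assumptions, not GERE's exact decoding.
def generate_evidence_titles(seq2seq, claim, valid_titles, beams=5):
    candidates = seq2seq(f"Generate evidence page titles for: {claim}",
                         num_return_sequences=beams)
    # Keep only titles that actually exist in the corpus index.
    return [t for t in candidates if t in valid_titles]
```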
arXiv Detail & Related papers (2022-04-12T03:49:35Z)
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
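A minimal sketch of such a sensitivity test: compare a fact-checker's accuracy on a clean evidence corpus against the same corpus poisoned with synthetic adversarial passages. The dataset fields and the `fact_checker` interface are assumptions, not the paper's exact evaluation harness.

```python
# Sketch: measure the accuracy drop when synthetic adversarial passages
# are injected into the evidence corpus. Interfaces are assumptions.
def attack_delta(fact_checker, dataset, adversarial_passages):
    def accuracy(corpus):
        hits = sum(fact_checker(ex["claim"], corpus) == ex["label"]
                   for ex in dataset)
        return hits / len(dataset)
    clean_corpus = [ex["evidence"] for ex in dataset]
    drop = accuracy(clean_corpus) - accuracy(clean_corpus + adversarial_passages)
    return drop  # positive value = performance lost under attack
```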
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating the justification behind a verdict.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
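A minimal sketch of the joint objective alluded to above, assuming a model with a veracity head and an explanation-generation head; the model interface and the weighting scheme are assumptions, not the paper's exact setup.

```python
# Sketch of jointly optimizing explanation generation and veracity
# prediction. The two-headed model interface and alpha are assumptions.
import torch

def joint_loss(model, batch, alpha=0.5):
    veracity_logits, explanation_logits = model(batch["claim_and_context"])
    loss_veracity = torch.nn.functional.cross_entropy(
        veracity_logits, batch["labels"])
    loss_explanation = torch.nn.functional.cross_entropy(
        explanation_logits.transpose(1, 2),  # (batch, vocab, seq) for token-level CE
        batch["explanation_tokens"])
    # Optimizing both objectives together, rather than separately,
    # is what the summary above reports as helping performance.
    return alpha * loss_explanation + (1 - alpha) * loss_veracity
```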
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
Training and scoring large numbers of heterogeneous OD models is computationally expensive. We propose a modular acceleration system, called SUOD, to address this.
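A sketch of the underlying idea (fitting heterogeneous unsupervised detectors in parallel and combining their standardized scores), using scikit-learn detectors for illustration rather than the SUOD library's own API.

```python
# Sketch of the idea behind SUOD: fit heterogeneous unsupervised
# detectors in parallel and combine standardized anomaly scores.
# Uses scikit-learn detectors for illustration, not SUOD's own API.
import numpy as np
from joblib import Parallel, delayed
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def ensemble_scores(X_train, X_test):
    detectors = [IsolationForest(random_state=0),
                 LocalOutlierFactor(novelty=True)]
    def fit_score(det):
        det.fit(X_train)
        s = -det.score_samples(X_test)       # higher = more anomalous
        return (s - s.mean()) / (s.std() + 1e-12)  # standardize before combining
    scores = Parallel(n_jobs=-1)(delayed(fit_score)(d) for d in detectors)
    return np.mean(scores, axis=0)           # average standardized scores
```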
arXiv Detail & Related papers (2020-03-11T00:22:50Z)