Truth Sleuth and Trend Bender: AI Agents to fact-check YouTube videos and influence opinions
- URL: http://arxiv.org/abs/2507.10577v2
- Date: Wed, 16 Jul 2025 13:25:34 GMT
- Title: Truth Sleuth and Trend Bender: AI Agents to fact-check YouTube videos and influence opinions
- Authors: Cécile Logé, Rehan Ghori
- Abstract summary: This paper develops an AI-powered system that fact-checks claims made in YouTube videos and engages users in the comment section. Our system comprises two main agents: Truth Sleuth and Trend Bender. We demonstrate the system's capabilities through experiments on established benchmark datasets and a real-world deployment on YouTube.
- Score: 0.5261718469769449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misinformation poses a significant threat in today's digital world, often spreading rapidly through platforms like YouTube. This paper introduces a novel approach to combating misinformation by developing an AI-powered system that not only fact-checks claims made in YouTube videos but also actively engages users in the comment section and challenges misleading narratives. Our system comprises two main agents: Truth Sleuth and Trend Bender. Truth Sleuth extracts claims from a YouTube video, uses a Retrieval-Augmented Generation (RAG) approach - drawing on sources like Wikipedia, Google Search, and Google FactCheck - to accurately assess their veracity, and generates a nuanced and comprehensive report. Through rigorous prompt engineering, Trend Bender leverages this report along with a curated corpus of relevant articles to generate insightful and persuasive comments designed to stimulate a productive debate. With a carefully designed self-evaluation loop, this agent is able to iteratively improve its style and refine its output. We demonstrate the system's capabilities through experiments on established benchmark datasets and a real-world deployment on YouTube, showcasing its potential to engage users and potentially influence perspectives. Our findings highlight the high accuracy of our fact-checking agent, and confirm the potential of AI-driven interventions in combating misinformation and fostering a more informed online space.
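The abstract's two-agent control flow (claim assessment, then iterative comment refinement against a self-evaluation score) can be sketched as follows. This is a minimal, hypothetical illustration of the loop structure only: the function names (`truth_sleuth`, `trend_bender`, `draft_comment`, `self_evaluate`) and the toy scoring rule are assumptions, and the real system uses LLMs with RAG over Wikipedia, Google Search, and Google FactCheck rather than these stand-ins.

```python
def truth_sleuth(claims, evidence):
    """Stand-in for the RAG fact-checking agent: label each claim
    'supported' if any retrieved evidence exists, else 'unverified'."""
    return {c: ("supported" if evidence.get(c) else "unverified") for c in claims}

def draft_comment(report, attempt):
    """Stand-in for the LLM comment generator."""
    verdicts = ", ".join(f"{c}: {v}" for c, v in sorted(report.items()))
    return f"[draft {attempt}] Fact-check summary -> {verdicts}"

def self_evaluate(comment, attempt):
    """Toy stand-in for the self-evaluation step: pretend the
    style/quality score improves with each revision."""
    return min(1.0, 0.4 + 0.3 * attempt)

def trend_bender(report, threshold=0.9, max_iters=5):
    """Iteratively redraft the comment until self-evaluation passes."""
    comment = ""
    for attempt in range(1, max_iters + 1):
        comment = draft_comment(report, attempt)
        if self_evaluate(comment, attempt) >= threshold:
            return comment
    return comment  # best effort after max_iters

report = truth_sleuth(["the earth is round"], {"the earth is round": ["NASA"]})
print(trend_bender(report))
```

With the toy score, draft 1 evaluates to 0.7 and draft 2 to 1.0, so the loop returns the second draft; in the paper this gating is performed by an LLM judging its own output.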
Related papers
- Veracity: An Open-Source AI Fact-Checking System [11.476157136162989]
This paper introduces Veracity, an open-source AI system designed to combat misinformation through transparent and accessible fact-checking. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications.
arXiv Detail & Related papers (2025-06-18T18:24:59Z)
- DAVID-XR1: Detecting AI-Generated Videos with Explainable Reasoning [58.70446237944036]
DAVID-X is the first dataset to pair AI-generated videos with detailed defect-level, temporal-spatial annotations and written rationales. We present DAVID-XR1, a video-language model designed to deliver an interpretable chain of visual reasoning. Our results highlight the promise of explainable detection methods for trustworthy identification of AI-generated video content.
arXiv Detail & Related papers (2025-06-13T13:39:53Z)
- Computational Fact-Checking of Online Discourse: Scoring scientific accuracy in climate change related news articles [0.14999444543328289]
This work aims to quantify the scientific accuracy of online media. By semantifying media of unknown veracity, their statements can be compared against equally processed trusted sources.
arXiv Detail & Related papers (2025-05-12T10:03:15Z)
- PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent [71.20471076045916]
Propaganda plays a critical role in shaping public opinion and fueling disinformation. PropaInsight systematically dissects propaganda into techniques, arousal appeals, and underlying intent. PropaGaze combines human-annotated data with high-quality synthetic data.
arXiv Detail & Related papers (2024-09-19T06:28:18Z)
- HonestBait: Forward References for Attractive but Faithful Headline Generation [13.456581900511873]
Forward references (FRs) are a writing technique often used for clickbait.
A self-verification process is included during training to avoid spurious inventions.
We present PANCO1, an innovative dataset containing pairs of fake news with verified news for attractive but faithful news headline generation.
arXiv Detail & Related papers (2023-06-26T16:34:37Z)
- Deep Breath: A Machine Learning Browser Extension to Tackle Online Misinformation [0.0]
This paper proposes a novel system for detecting, processing, and warning users about misleading content online.
By training a machine learning model on an existing dataset of 32,000 clickbait news article headlines, the model predicts how sensationalist a headline is.
It interfaces with a web browser extension which constructs a unique content warning notification based on existing design principles.
arXiv Detail & Related papers (2023-01-09T12:43:58Z) - Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against
Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications on the inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.