TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking
- URL: http://arxiv.org/abs/2505.07891v2
- Date: Sun, 22 Jun 2025 15:39:02 GMT
- Title: TrumorGPT: Graph-Based Retrieval-Augmented Large Language Model for Fact-Checking
- Authors: Ching Nam Hang, Pei-Duo Yu, Chee Wei Tan
- Abstract summary: TrumorGPT is a novel generative artificial intelligence solution designed for fact-checking in the health domain. It aims to distinguish "trumors", which are health-related rumors that turn out to be true. TrumorGPT incorporates graph-based retrieval-augmented generation (GraphRAG) to address the hallucination issue.
- Score: 2.3704813250344436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the age of social media, the rapid spread of misinformation and rumors has led to the emergence of infodemics, where false information poses a significant threat to society. To combat this issue, we introduce TrumorGPT, a novel generative artificial intelligence solution designed for fact-checking in the health domain. TrumorGPT aims to distinguish "trumors", which are health-related rumors that turn out to be true, providing a crucial tool in differentiating between mere speculation and verified facts. This framework leverages a large language model (LLM) with few-shot learning for semantic health knowledge graph construction and semantic reasoning. TrumorGPT incorporates graph-based retrieval-augmented generation (GraphRAG) to address the hallucination issue common in LLMs and the limitations of static training data. GraphRAG involves accessing and utilizing information from regularly updated semantic health knowledge graphs that consist of the latest medical news and health information, ensuring that fact-checking by TrumorGPT is based on the most recent data. Evaluating with extensive healthcare datasets, TrumorGPT demonstrates superior performance in fact-checking for public health claims. Its ability to effectively conduct fact-checking across various platforms marks a critical step forward in the fight against health-related misinformation, enhancing trust and accuracy in the digital information age.
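The abstract describes a GraphRAG loop: retrieve facts from a regularly updated semantic health knowledge graph, then ground the LLM's fact-checking verdict in that retrieved evidence. The following is a minimal illustrative sketch of that retrieve-then-ground pattern; the triples, claim, matching heuristic, and prompt format are assumptions for illustration, not TrumorGPT's actual pipeline.

```python
# Illustrative sketch of graph-based retrieval-augmented fact-checking.
# The knowledge-graph triples and the keyword-matching retrieval below
# are simplifying assumptions, not the paper's implementation.

# A tiny semantic health knowledge graph as (subject, relation, object) triples.
KNOWLEDGE_GRAPH = [
    ("vitamin C", "does_not_prevent", "common cold"),
    ("hand washing", "reduces_risk_of", "influenza"),
    ("measles vaccine", "does_not_cause", "autism"),
]

def retrieve_triples(claim, graph):
    """Return triples whose subject or object appears in the claim text."""
    claim_lower = claim.lower()
    return [
        (s, r, o) for (s, r, o) in graph
        if s.lower() in claim_lower or o.lower() in claim_lower
    ]

def build_prompt(claim, evidence):
    """Assemble a prompt that grounds the LLM's verdict in graph evidence."""
    lines = [f"- {s} {r.replace('_', ' ')} {o}" for (s, r, o) in evidence]
    return (
        f"Claim: {claim}\n"
        "Evidence from knowledge graph:\n" + "\n".join(lines) + "\n"
        "Verdict (true / false / unverified):"
    )

claim = "Vitamin C prevents the common cold."
evidence = retrieve_triples(claim, KNOWLEDGE_GRAPH)
print(build_prompt(claim, evidence))
```

In a real system, the keyword match would be replaced by entity linking and graph traversal over a continuously updated knowledge graph, and the prompt would be sent to the LLM for the final verdict.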
Related papers
- Large Language Models' Varying Accuracy in Recognizing Risk-Promoting and Health-Supporting Sentiments in Public Health Discourse: The Cases of HPV Vaccination and Heated Tobacco Products [2.0618817976970103]
Large Language Models (LLMs) have gained attention as a powerful technology, yet their accuracy and feasibility in capturing different opinions on health issues are largely unexplored. This research examines how accurately three prominent LLMs detect risk-promoting versus health-supporting sentiments. Specifically, models often show higher accuracy for risk-promoting sentiment on Facebook, whereas health-supporting messages on Twitter are more accurately detected.
arXiv Detail & Related papers (2025-07-06T11:57:02Z) - HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation [68.4316501012718]
HealthGPT is a powerful Medical Large Vision-Language Model (Med-LVLM). It integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm.
arXiv Detail & Related papers (2025-02-14T00:42:36Z) - Enhancing Health Information Retrieval with RAG by Prioritizing Topical Relevance and Factual Accuracy [0.7673339435080445]
This paper introduces a solution driven by Retrieval-Augmented Generation (RAG) to enhance the retrieval of health-related documents grounded in scientific evidence. In particular, the authors propose a three-stage model: in the first stage, the user's query is employed to retrieve topically relevant passages, with associated references, from a knowledge base constituted by scientific literature. In the second stage, these passages, alongside the initial query, are processed by LLMs to generate a contextually relevant rich text (GenText). In the last stage, the documents to be retrieved are evaluated and ranked both from the point of
arXiv Detail & Related papers (2025-02-07T05:19:13Z) - Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance. To adapt epidemiology theory to rumor detection, each user's stance toward the source information is expected to be annotated. Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z) - Medical Graph RAG: Towards Safe Medical Large Language Model via Graph Retrieval-Augmented Generation [9.286509119104563]
We introduce a novel graph-based Retrieval-Augmented Generation framework specifically designed for the medical domain, called MedGraphRAG.
Our approach is validated on 9 medical Q&A benchmarks, 2 health fact-checking benchmarks, and one collected dataset testing long-form generation.
arXiv Detail & Related papers (2024-08-08T03:11:12Z) - HRDE: Retrieval-Augmented Large Language Models for Chinese Health Rumor Detection and Explainability [6.800433977880405]
This paper builds a dataset containing 1.12 million health-related rumors (HealthRCN) through web scraping of common health-related questions.
We propose retrieval-augmented large language models for Chinese health rumor detection and explainability (HRDE).
arXiv Detail & Related papers (2024-06-30T11:27:50Z) - The Perils & Promises of Fact-checking with Large Language Models [55.869584426820715]
Large Language Models (LLMs) are increasingly trusted to write academic papers, lawsuits, and news articles.
We evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions.
Our results show the enhanced prowess of LLMs when equipped with contextual information.
While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy.
arXiv Detail & Related papers (2023-10-20T14:49:47Z) - Fact-Checking Generative AI: Ontology-Driven Biological Graphs for Disease-Gene Link Verification [45.65374554914359]
We aim to achieve fact-checking of the knowledge embedded in biological graphs that were contrived from ChatGPT contents.
We adopted a biological networks approach that enables the systematic interrogation of ChatGPT's linked entities.
This study demonstrated high accuracy of aggregate disease-gene links relationships found in ChatGPT-generated texts.
arXiv Detail & Related papers (2023-08-07T22:13:30Z) - HuatuoGPT, towards Taming Language Model to Be a Doctor [67.96794664218318]
HuatuoGPT is a large language model (LLM) for medical consultation.
We leverage both distilled data from ChatGPT and real-world data from doctors in the supervised fine-tuning stage.
arXiv Detail & Related papers (2023-05-24T11:56:01Z) - DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4 [80.36535668574804]
We develop a novel GPT-4-enabled de-identification framework (DeID-GPT).
Our developed DeID-GPT showed the highest accuracy and remarkable reliability in masking private information from the unstructured medical text.
This study is one of the earliest to utilize ChatGPT and GPT-4 for medical text data processing and de-identification.
arXiv Detail & Related papers (2023-03-20T11:34:37Z) - Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z) - Drink Bleach or Do What Now? Covid-HeRA: A Study of Risk-Informed Health Decision Making in the Presence of COVID-19 Misinformation [23.449057978351945]
We frame health misinformation as a risk assessment task.
We study the severity of each misinformation story and how readers perceive this severity.
We evaluate several traditional and state-of-the-art models and show there is a significant gap in performance.
arXiv Detail & Related papers (2020-10-17T08:34:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.